Linux-Utility | Manual-Page | TLDR-Summary
---|---|---
ac | ac(1) - Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | FILES | AUTHOR | SEE ALSO | COLOPHON AC(1) General Commands Manual AC(1) NAME ac - print statistics about users' connect time SYNOPSIS ac [ -d | --daily-totals ] [ -y | --print-year ] [ -p | --individual-totals ] [ people ] [ -f | --file filename ] [ -a | --all-days ] [ --complain ] [ --reboots ] [ --supplants ] [ --timewarps ] [ --compatibility ] [ --tw-leniency num ] [ --tw-suspicious num ] [ -z | --print-zeros ] [ --debug ] [ -V | --version ] [ -h | --help ] DESCRIPTION ac prints out a report of connect time (in hours) based on the logins/logouts in the current wtmp file. A total is also printed out. The accounting file wtmp is maintained by init(8) and login(1). Neither ac nor login creates the wtmp file; if it doesn't exist, no accounting is done. To begin accounting, create the file with a length of zero. NOTE: The wtmp file can get really big, really fast. You might want to trim it every once in a while. GNU ac works nearly the same as UNIX ac, though it's a little smarter in several ways. You should therefore expect differences in the output of GNU ac and the output of ac's on other systems. Use the command info accounting to get additional information. OPTIONS -d, --daily-totals Print totals for each day rather than just one big total at the end. The output looks like this: Jul 3 total 1.17 Jul 4 total 2.10 Jul 5 total 8.23 Jul 6 total 2.10 Jul 7 total 0.30 -p, --individual-totals Print time totals for each user in addition to the usual everything-lumped-into-one value. It looks like: bob 8.06 goff 0.60 maley 7.37 root 0.12 total 16.15 people Print out the sum total of the connect time used by all of the users included in people. Note that people is a space-separated list of valid user names; wildcards are not allowed. 
-f, --file filename Read from the file filename instead of the system's wtmp file. --complain When the wtmp file has a problem (a time-warp, missing record, or whatever), print out an appropriate error. --reboots Reboot records are NOT written at the time of a reboot, but when the system restarts; therefore, it is impossible to know exactly when the reboot occurred. Users may have been logged into the system at the time of the reboot, and many ac's automatically count the time between the login and the reboot record against the user (even though all of that time shouldn't be, perhaps, if the system is down for a long time, for instance). If you want to count this time, include this flag. *For vanilla ac compatibility, include this flag.* --supplants Sometimes, a logout record is not written for a specific terminal, so the time that the last user accrued cannot be calculated. If you want to include the time from the user's login to the next login on the terminal (though probably incorrect), include this flag. *For vanilla ac compatibility, include this flag.* --timewarps Sometimes, entries in a wtmp file will suddenly jump back into the past without a clock change record occurring. It is impossible to know how long a user was logged in when this occurs. If you want to count the time between the login and the time warp against the user, include this flag. *For vanilla ac compatibility, include this flag.* --compatibility This is shorthand for typing out the three above options. -a, --all-days If we're printing daily totals, print a record for every day instead of skipping intervening days where there is no login activity. Without this flag, time accrued during those intervening days gets listed under the next day where there is login activity. --tw-leniency num Set the time warp leniency to num seconds. 
Records in wtmp files might be slightly out of order (most notably when two logins occur within a one-second period - the second one gets written first). By default, this value is set to 60. If the program notices this problem, time is not assigned to users unless the --timewarps flag is used. --tw-suspicious num Set the time warp suspicious value to num seconds. If two records in the wtmp file are farther than this number of seconds apart, there is a problem with the wtmp file (or your machine hasn't been used in a year). If the program notices this problem, time is not assigned to users unless the --timewarps flag is used. -y, --print-year Print year when displaying dates. -z, --print-zeros If a total for any category (save the grand total) is zero, print it. The default is to suppress printing. --debug Print verbose internal information. -V, --version Print the version number of ac to standard output and quit. -h, --help Prints the usage string and default locations of system files to standard output and exits. FILES wtmp The system wide login record file. See wtmp(5) for further details. AUTHOR The GNU accounting utilities were written by Noel Cragg <noel@gnu.ai.mit.edu>. The man page was adapted from the accounting texinfo page by Susan Kleinmann <sgk@sgk.tiac.net>. SEE ALSO login(1), wtmp(5), init(8), sa(8) COLOPHON This page is part of the psacct (process accounting utilities) project. Information about the project can be found at http://www.gnu.org/software/acct/. If you have a bug report for this manual page, see http://www.gnu.org/software/acct/. This page was obtained from the tarball acct-6.6.4.tar.gz fetched from http://ftp.gnu.org/gnu/acct/ on 2023-12-22. 
If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org 2010 August 16 AC(1) Pages that refer to this page: utmp(5), accton(8), sa(8) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH. | # ac\n\n> Print statistics on how long users have been connected.\n> More information: <https://www.gnu.org/software/acct/manual/accounting.html#ac>.\n\n- Print how long the current user has been connected in hours:\n\n`ac`\n\n- Print how long users have been connected in hours:\n\n`ac --individual-totals`\n\n- Print how long a particular user has been connected in hours:\n\n`ac --individual-totals {{username}}`\n\n- Print how long a particular user has been connected in hours per day (with total):\n\n`ac --daily-totals --individual-totals {{username}}`\n\n- Also display additional details:\n\n`ac --compatibility`\n |
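The --daily-totals figures shown above lend themselves to simple post-processing. As a minimal sketch, the pipeline below sums the hours column with awk; the printf replays the sample numbers from the man page so it runs even on a machine without process accounting enabled, but on a live system you would pipe `ac --daily-totals` straight into the same awk program:

```shell
# Sum the hours column (last field) of `ac --daily-totals`-style output.
printf '%s\n' \
  'Jul  3  total      1.17' \
  'Jul  4  total      2.10' \
  'Jul  5  total      8.23' \
  'Jul  6  total      2.10' \
  'Jul  7  total      0.30' |
awk '{ sum += $NF } END { printf "%.2f\n", sum }'
```

The result should match, up to rounding, the grand total that `ac` prints on its own.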
addpart | addpart(8) - Linux manual page NAME | SYNOPSIS | DESCRIPTION | PARAMETERS | SEE ALSO | REPORTING BUGS | AVAILABILITY ADDPART(8) System Administration ADDPART(8) NAME addpart - tell the kernel about the existence of a partition SYNOPSIS addpart device partition start length DESCRIPTION addpart tells the Linux kernel about the existence of the specified partition. The command is a simple wrapper around the "add partition" ioctl. This command doesn't manipulate partitions on a block device. PARAMETERS device The disk device. partition The partition number. start The beginning of the partition (in 512-byte sectors). length The length of the partition (in 512-byte sectors). -h, --help Display help text and exit. -V, --version Print version and exit. SEE ALSO delpart(8), fdisk(8), parted(8), partprobe(8), partx(8) REPORTING BUGS For bug reports, use the issue tracker at https://github.com/util-linux/util-linux/issues. AVAILABILITY The addpart command is part of the util-linux package which can be downloaded from Linux Kernel Archive <https://www.kernel.org/pub/linux/utils/util-linux/>. This page is part of the util-linux (a random collection of Linux utilities) project. Information about the project can be found at https://www.kernel.org/pub/linux/utils/util-linux/. If you have a bug report for this manual page, send it to util-linux@vger.kernel.org. This page was obtained from the project's upstream Git repository git://git.kernel.org/pub/scm/utils/util-linux/util-linux.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-14.) 
util-linux 2.39.594-1e0ad 2023-07-19 ADDPART(8) Pages that refer to this page: delpart(8), partx(8), resizepart(8) | # addpart\n\n> Tell the Linux kernel about the existence of the specified partition.\n> A simple wrapper around the `add partition` ioctl.\n> More information: <https://manned.org/addpart>.\n\n- Tell the kernel about the existence of the specified partition:\n\n`addpart {{device}} {{partition}} {{start}} {{length}}`\n |
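Because addpart takes start and length in 512-byte sectors, byte- or MiB-based sizes have to be converted first. A minimal sketch of that arithmetic (the device name /dev/sdb and the MiB figures are invented for illustration, and the resulting command is only echoed, since actually running it needs root and a real disk):

```shell
# Convert MiB offsets into the 512-byte sector units addpart expects.
start_mib=1      # partition starts at 1 MiB
length_mib=100   # partition is 100 MiB long
start_sectors=$(( start_mib * 1024 * 1024 / 512 ))
length_sectors=$(( length_mib * 1024 * 1024 / 512 ))
# Would register partition 1 of the hypothetical /dev/sdb with the kernel:
echo "addpart /dev/sdb 1 $start_sectors $length_sectors"
```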
addr2line | addr2line(1) - Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | SEE ALSO | COPYRIGHT | COLOPHON ADDR2LINE(1) GNU Development Tools ADDR2LINE(1) NAME addr2line - convert addresses or symbol+offset into file names and line numbers SYNOPSIS addr2line [-a|--addresses] [-b bfdname|--target=bfdname] [-C|--demangle[=style]] [-r|--no-recurse-limit] [-R|--recurse-limit] [-e filename|--exe=filename] [-f|--functions] [-s|--basename] [-i|--inlines] [-p|--pretty-print] [-j|--section=name] [-H|--help] [-V|--version] [addr addr ...] DESCRIPTION addr2line translates addresses or symbol+offset into file names and line numbers. Given an address or symbol+offset in an executable or an offset in a section of a relocatable object, it uses the debugging information to figure out which file name and line number are associated with it. The executable or relocatable object to use is specified with the -e option. The default is the file a.out. The section in the relocatable object to use is specified with the -j option. addr2line has two modes of operation. In the first, hexadecimal addresses or symbol+offset are specified on the command line, and addr2line displays the file name and line number for each address. In the second, addr2line reads hexadecimal addresses or symbol+offset from standard input, and prints the file name and line number for each address on standard output. In this mode, addr2line may be used in a pipe to convert dynamically chosen addresses. The format of the output is FILENAME:LINENO. By default each input address generates one line of output. Two options can generate additional lines before each FILENAME:LINENO line (in that order). If the -a option is used then a line with the input address is displayed. If the -f option is used, then a line with the FUNCTIONNAME is displayed. This is the name of the function containing the address. 
One option can generate additional lines after the FILENAME:LINENO line. If the -i option is used and the code at the given address is present there because of inlining by the compiler then additional lines are displayed afterwards. One or two extra lines (if the -f option is used) are displayed for each inlined function. Alternatively if the -p option is used then each input address generates a single, long, output line containing the address, the function name, the file name and the line number. If the -i option has also been used then any inlined functions will be displayed in the same manner, but on separate lines, and prefixed by the text (inlined by). If the file name or function name cannot be determined, addr2line will print two question marks in their place. If the line number cannot be determined, addr2line will print 0. When symbol+offset is used, +offset is optional, except when the symbol is ambiguous with a hex number. The resolved symbols can be mangled or unmangled, except unmangled symbols with + are not allowed. OPTIONS The long and short forms of options, shown here as alternatives, are equivalent. -a --addresses Display the address before the function name, file and line number information. The address is printed with a 0x prefix to easily identify it. -b bfdname --target=bfdname Specify that the object-code format for the object files is bfdname. -C --demangle[=style] Decode (demangle) low-level symbol names into user-level names. Besides removing any initial underscore prepended by the system, this makes C++ function names readable. Different compilers have different mangling styles. The optional demangling style argument can be used to choose an appropriate demangling style for your compiler. -e filename --exe=filename Specify the name of the executable for which addresses should be translated. The default file is a.out. -f --functions Display function names as well as file and line number information. 
-s --basenames Display only the base of each file name. -i --inlines If the address belongs to a function that was inlined, the source information for all enclosing scopes back to the first non-inlined function will also be printed. For example, if "main" inlines "callee1" which inlines "callee2", and address is from "callee2", the source information for "callee1" and "main" will also be printed. -j --section Read offsets relative to the specified section instead of absolute addresses. -p --pretty-print Make the output more human friendly: each location is printed on one line. If option -i is specified, lines for all enclosing scopes are prefixed with (inlined by). -r -R --recurse-limit --no-recurse-limit --recursion-limit --no-recursion-limit Enables or disables a limit on the amount of recursion performed whilst demangling strings. Since the name mangling formats allow for an infinite level of recursion it is possible to create strings whose decoding will exhaust the amount of stack space available on the host machine, triggering a memory fault. The limit tries to prevent this from happening by restricting recursion to 2048 levels of nesting. The default is for this limit to be enabled, but disabling it may be necessary in order to demangle truly complicated names. Note however that if the recursion limit is disabled then stack exhaustion is possible and any bug reports about such an event will be rejected. The -r option is a synonym for the --no-recurse-limit option. The -R option is a synonym for the --recurse-limit option. Note this option is only effective if the -C or --demangle option has been enabled. @file Read command-line options from file. The options read are inserted in place of the original @file option. If file does not exist, or cannot be read, then the option will be treated literally, and not removed. Options in file are separated by whitespace. 
A whitespace character may be included in an option by surrounding the entire option in either single or double quotes. Any character (including a backslash) may be included by prefixing the character to be included with a backslash. The file may itself contain additional @file options; any such options will be processed recursively. SEE ALSO Info entries for binutils. COPYRIGHT Copyright (c) 1991-2023 Free Software Foundation, Inc. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.3 or any later version published by the Free Software Foundation; with no Invariant Sections, with no Front-Cover Texts, and with no Back-Cover Texts. A copy of the license is included in the section entitled "GNU Free Documentation License". COLOPHON This page is part of the binutils (a collection of tools for working with executable binaries) project. Information about the project can be found at http://www.gnu.org/software/binutils/. If you have a bug report for this manual page, see http://sourceware.org/bugzilla/enter_bug.cgi?product=binutils. This page was obtained from the tarball binutils-2.41.tar.gz fetched from https://ftp.gnu.org/gnu/binutils/ on 2023-12-22. binutils-2.41 2023-12-22 ADDR2LINE(1) Pages that refer to this page: backtrace(3) 
| # addr2line\n\n> Convert addresses of a binary into file names and line numbers.\n> More information: <https://manned.org/addr2line>.\n\n- Display the filename and line number of the source code from an instruction address of an executable:\n\n`addr2line --exe={{path/to/executable}} {{address}}`\n\n- Display the function name, filename and line number:\n\n`addr2line --exe={{path/to/executable}} --functions {{address}}`\n\n- Demangle the function name for C++ code:\n\n`addr2line --exe={{path/to/executable}} --functions --demangle {{address}}`\n |
agetty | agetty(8) - Linux manual page NAME | SYNOPSIS | DESCRIPTION | ARGUMENTS | OPTIONS | CONFIG FILE ITEMS | EXAMPLE | SECURITY NOTICE | ISSUE FILES | FILES | CREDENTIALS | BUGS | DIAGNOSTICS | AUTHORS | REPORTING BUGS | AVAILABILITY AGETTY(8) System Administration AGETTY(8) NAME agetty - alternative Linux getty SYNOPSIS agetty [options] port [baud_rate...] [term] DESCRIPTION agetty opens a tty port, prompts for a login name and invokes the /bin/login command. It is normally invoked by init(8). agetty has several non-standard features that are useful for hardwired and for dial-in lines: Adapts the tty settings to parity bits and to erase, kill, end-of-line and uppercase characters when it reads a login name. The program can handle 7-bit characters with even, odd, none or space parity, and 8-bit characters with no parity. The following special characters are recognized: Control-U (kill); DEL and backspace (erase); carriage return and line feed (end of line). See also the --erase-chars and --kill-chars options. Optionally deduces the baud rate from the CONNECT messages produced by Hayes(tm)-compatible modems. Optionally does not hang up when it is given an already opened line (useful for call-back applications). Optionally does not display the contents of the /etc/issue file. Optionally displays alternative issue files or directories instead of /etc/issue or /etc/issue.d. Optionally does not ask for a login name. Optionally invokes a non-standard login program instead of /bin/login. Optionally turns on hardware flow control. Optionally forces the line to be local with no need for carrier detect. This program does not use the /etc/gettydefs (System V) or /etc/gettytab (SunOS 4) files. ARGUMENTS port A path name relative to the /dev directory. 
If a "-" is specified, agetty assumes that its standard input is already connected to a tty port and that a connection to a remote user has already been established. Under System V, a "-" port argument should be preceded by a "--". baud_rate,... A comma-separated list of one or more baud rates. Each time agetty receives a BREAK character it advances through the list, which is treated as if it were circular. Baud rates should be specified in descending order, so that the null character (Ctrl-@) can also be used for baud-rate switching. This argument is optional and unnecessary for virtual terminals. The default for serial terminals is to keep the current baud rate (see --keep-baud) and, if unsuccessful, to default to '9600'. term The value to be used for the TERM environment variable. This overrides whatever init(1) may have set, and is inherited by login and the shell. The default is 'vt100', or 'linux' for Linux on a virtual terminal, or 'hurd' for GNU Hurd on a virtual terminal. OPTIONS -8, --8bits Assume that the tty is 8-bit clean, hence disable parity detection. -a, --autologin username Automatically log in the specified user without asking for a username or password. Using this option causes an -f username option and argument to be added to the /bin/login command line. See --login-options, which can be used to modify this option's behavior. Note that --autologin may affect the way in which getty initializes the serial line, because on auto-login agetty does not read from the line and it has no opportunity to optimize the line setting. -c, --noreset Do not reset terminal cflags (control modes). See termios(3) for more details. -E, --remote Typically the login(1) command is given a remote hostname when called by something such as telnetd(8). This option allows agetty to pass what it is using for a hostname to login(1) for use in utmp(5). See --host, login(1), and utmp(5). 
If the --host fakehost option is given, then an -h fakehost option and argument are added to the /bin/login command line. If the --nohostname option is given, then an -H option is added to the /bin/login command line. See --login-options. -f, --issue-file path Specifies a ":" delimited list of files and directories to be displayed instead of /etc/issue (or other). All specified files and directories are displayed; missing or empty files are silently ignored. If the specified path is a directory then display all files with .issue file extension in version-sort order from the directory. This allows custom messages to be displayed on different terminals. The --noissue option will override this option. --show-issue Display the current issue file (or other) on the current terminal and exit. Use this option to review the current setting; it is not designed for any other purpose. Note that output may use some default or incomplete information as proper output depends on terminal and agetty command line. -h, --flow-control Enable hardware (RTS/CTS) flow control. It is left up to the application to disable software (XON/XOFF) flow protocol where appropriate. -H, --host fakehost Write the specified fakehost into the utmp file. Normally, no login host is given, since agetty is used for local hardwired connections and consoles. However, this option can be useful for identifying terminal concentrators and the like. -i, --noissue Do not display the contents of /etc/issue (or other) before writing the login prompt. Terminals or communications hardware may become confused when receiving lots of text at the wrong baud rate; dial-up scripts may fail if the login prompt is preceded by too much text. -I, --init-string initstring Set an initial string to be sent to the tty or modem before sending anything else. This may be used to initialize a modem. Non-printable characters may be sent by writing their octal code preceded by a backslash (\). 
For example, to send a linefeed character (ASCII 10, octal 012), write \12. -J, --noclear Do not clear the screen before prompting for the login name. By default the screen is cleared. -l, --login-program login_program Invoke the specified login_program instead of /bin/login. This allows the use of a non-standard login program. Such a program could, for example, ask for a dial-up password or use a different password file. See --login-options. -L, --local-line[=mode] Control the CLOCAL line flag. The optional mode argument is auto, always or never. If the mode argument is omitted, then the default is always. If the --local-line option is not given at all, then the default is auto. always Forces the line to be a local line with no need for carrier detect. This can be useful when you have a locally attached terminal where the serial line does not set the carrier-detect signal. never Explicitly clears the CLOCAL flag from the line setting and the carrier-detect signal is expected on the line. auto The agetty default. Does not modify the CLOCAL setting and follows the setting enabled by the kernel. -m, --extract-baud Try to extract the baud rate from the CONNECT status message produced by Hayes(tm)-compatible modems. These status messages are of the form: "<junk><speed><junk>". agetty assumes that the modem emits its status message at the same speed as specified with (the first) baud_rate value on the command line. Since the --extract-baud feature may fail on heavily-loaded systems, you still should enable BREAK processing by enumerating all expected baud rates on the command line. --list-speeds Display supported baud rates. These are determined at compilation time. -n, --skip-login Do not prompt the user for a login name. This can be used in connection with the --login-program option to invoke a non-standard login process such as a BBS system. 
Note that with the --skip-login option, agetty gets no input from the user who logs in and therefore will not be able to figure out parity, character size, and newline processing of the connection. It defaults to space parity, 7 bit characters, and ASCII CR (13) end-of-line character. Beware that the program that agetty starts (usually /bin/login) is run as root. -N, --nonewline Do not print a newline before writing out /etc/issue. -o, --login-options login_options Options and arguments that are passed to login(1). Where \u is replaced by the login name. For example: --login-options '-h darkstar -- \u' See --autologin, --login-program and --remote. Please read the SECURITY NOTICE below before using this option. -p, --login-pause Wait for any key before dropping to the login prompt. Can be combined with --autologin to save memory by lazily spawning shells. -r, --chroot directory Change root to the specified directory. -R, --hangup Call vhangup(2) to do a virtual hangup of the specified terminal. -s, --keep-baud Try to keep the existing baud rate. The baud rates from the command line are used when agetty receives a BREAK character. If other baud rates are specified, then the original baud rate is also saved to the end of the wanted baud rates list. This can be used to return to the original baud rate after unexpected BREAKs. -t, --timeout timeout Terminate if no user name could be read within timeout seconds. Use of this option with hardwired terminal lines is not recommended. -U, --detect-case Turn on support for detecting an uppercase-only terminal. This setting will detect a login name containing only capitals as indicating an uppercase-only terminal and turn on some upper-to-lower case conversions. Note that this has no support for any Unicode characters. -w, --wait-cr Wait for the user or the modem to send a carriage-return or a linefeed character before sending the /etc/issue file (or others) and the login prompt. This is useful with the --init-string option. 
--nohints Do not print hints about Num, Caps and Scroll Locks. --nohostname By default the hostname will be printed. With this option enabled, no hostname at all will be shown. This setting can also be enabled by the LOGIN_PLAIN_PROMPT option in the /etc/login.defs configuration file (see below for more details). --long-hostname By default the hostname is only printed until the first dot. With this option enabled, the fully qualified hostname by gethostname(3P) or (if not found) by getaddrinfo(3) is shown. --erase-chars string This option specifies additional characters that should be interpreted as a backspace ("ignore the previous character") when the user types the login name. The default additional 'erase' has been '#', but since util-linux 2.23 no additional erase characters are enabled by default. --kill-chars string This option specifies additional characters that should be interpreted as a kill ("ignore all previous characters") when the user types the login name. The default additional 'kill' has been '@', but since util-linux 2.23 no additional kill characters are enabled by default. --chdir directory Change directory before the login. --delay number Sleep number seconds before opening the tty. --nice number Run login with this priority. --reload Ask all running agetty instances to reload and update their displayed prompts, if the user has not yet commenced logging in. After doing so the command will exit. This feature might be unsupported on systems without Linux inotify(7). -h, --help Display help text and exit. -V, --version Print version and exit. CONFIG FILE ITEMS agetty reads the /etc/login.defs configuration file (see login.defs(5)). Note that the configuration file could be distributed with another package (usually shadow-utils). The following configuration items are relevant for agetty: LOGIN_PLAIN_PROMPT (boolean) Tell agetty that printing the hostname should be suppressed in the login: prompt. 
This is an alternative to the --nohostname command line option. The default value is no. EXAMPLE This section shows examples for the process field of an entry in the /etc/inittab file. You'll have to prepend appropriate values for the other fields. See inittab(5) for more details. For a hardwired line or a console tty: /sbin/agetty 9600 ttyS1 For a directly connected terminal without proper carrier-detect wiring (try this if your terminal just sleeps instead of giving you a password: prompt): /sbin/agetty --local-line 9600 ttyS1 vt100 For an old-style dial-in line with a 9600/2400/1200 baud modem: /sbin/agetty --extract-baud --timeout 60 ttyS1 9600,2400,1200 For a Hayes modem with a fixed 115200 bps interface to the machine (the example init string turns off modem echo and result codes, makes modem/computer DCD track modem/modem DCD, makes a DTR drop cause a disconnection, and turns on auto-answer after 1 ring): /sbin/agetty --wait-cr --init-string 'ATE0Q1&D2&C1S0=1\015' 115200 ttyS1 SECURITY NOTICE If you use the --login-program and --login-options options, be aware that a malicious user may try to enter lognames with embedded options, which then get passed to the used login program. agetty does check for a leading "-" and makes sure the logname gets passed as one parameter (so embedded spaces will not create yet another parameter), but depending on how the login binary parses the command line that might not be sufficient. Check that the used login program cannot be abused this way. Some programs use "--" to indicate that the rest of the command line should not be interpreted as options. Use this feature if available by passing "--" before the username gets passed by \u. ISSUE FILES The default issue file is /etc/issue. If the file exists, then agetty also checks for the /etc/issue.d directory. The directory is an optional extension to the default issue file, and the content of the directory is printed after the /etc/issue content. 
If the /etc/issue does not exist, then the directory is ignored. All files with .issue extension from the directory are printed in version-sort order. The directory can be used to maintain 3rd-party messages independently of the primary system /etc/issue file. Since version 2.35 additional locations for the issue file and directory are supported. If the default /etc/issue does not exist, then agetty checks for /run/issue and /run/issue.d, thereafter for /usr/lib/issue and /usr/lib/issue.d. The directory /etc is expected for host specific configuration, /run is expected for generated stuff and /usr/lib for static distribution maintained configuration. The default path may be overridden by the --issue-file option. In this case the specified path has to be a file or directory and all the default issue file and directory locations are ignored. The issue file feature can be completely disabled by the --noissue option. It is possible to review the current issue file by agetty --show-issue on the current terminal. The issue files may contain certain escape codes to display the system name, date, time et cetera. All escape codes consist of a backslash (\) immediately followed by one of the characters listed below. 4 or 4{interface} Insert the IPv4 address of the specified network interface (for example: \4{eth0}). If the interface argument is not specified, then select the first fully configured (UP, non-LOOPBACK, RUNNING) interface. If no configured interface is found, fall back to the IP address of the machine's hostname. 6 or 6{interface} The same as \4 but for IPv6. b Insert the baudrate of the current line. d Insert the current date. e or e{name} Translate the human-readable name to an escape sequence and insert it (for example: \e{red}Alert text.\e{reset}). If the name argument is not specified, then insert \033. 
The currently supported names are: black, blink, blue, bold, brown, cyan, darkgray, gray, green, halfbright, lightblue, lightcyan, lightgray, lightgreen, lightmagenta, lightred, magenta, red, reset, reverse, yellow and white. All unknown names are silently ignored. s Insert the system name (the name of the operating system). Same as 'uname -s'. See also the \S escape code. S or S{VARIABLE} Insert the VARIABLE data from /etc/os-release. If this file does not exist then fall back to /usr/lib/os-release. If the VARIABLE argument is not specified, then use PRETTY_NAME from the file or the system name (see \s). This escape code can be used to keep /etc/issue distribution and release independent. Note that \S{ANSI_COLOR} is converted to the real terminal escape sequence. l Insert the name of the current tty line. m Insert the architecture identifier of the machine. Same as uname -m. n Insert the nodename of the machine, also known as the hostname. Same as uname -n. o Insert the NIS domainname of the machine. Same as hostname -d. O Insert the DNS domainname of the machine. r Insert the release number of the OS. Same as uname -r. t Insert the current time. u Insert the number of current users logged in. U Insert the string "1 user" or "<n> users" where <n> is the number of current users logged in. v Insert the version of the OS, that is, the build-date and such. An example. On my system, the following /etc/issue file: This is \n.\o (\s \m \r) \t displays as: This is thingol.orcan.dk (Linux i386 1.1.9) 18:29:30 FILES top /var/run/utmp the system status file. /etc/issue printed before the login prompt. /etc/os-release /usr/lib/os-release operating system identification data. /dev/console problem reports (if syslog(3) is not used). /etc/inittab init(8) configuration file for SysV-style init daemon. CREDENTIALS top agetty supports configuration via systemd credentials (see https://systemd.io/CREDENTIALS/). 
agetty reads the following systemd credentials: agetty.autologin (string) If set, configures agetty to automatically log in the specified user without asking for a username or password, similarly to the --autologin option. BUGS top The baud-rate detection feature (the --extract-baud option) requires that agetty be scheduled soon enough after completion of a dial-in call (within 30 ms with modems that talk at 2400 baud). For robustness, always use the --extract-baud option in combination with a multiple baud rate command-line argument, so that BREAK processing is enabled. The text in the /etc/issue file (or other) and the login prompt are always output with 7-bit characters and space parity. The baud-rate detection feature (the --extract-baud option) requires that the modem emits its status message after raising the DCD line. DIAGNOSTICS top Depending on how the program was configured, all diagnostics are written to the console device or reported via the syslog(3) facility. Error messages are produced if the port argument does not specify a terminal device; if there is no utmp entry for the current process (System V only); and so on. AUTHORS top Werner Fink <werner@suse.de>, Karel Zak <kzak@redhat.com> The original agetty for serial terminals was written by W.Z. Venema <wietse@wzv.win.tue.nl> and ported to Linux by Peter Orbaek <poe@daimi.aau.dk>. REPORTING BUGS top For bug reports, use the issue tracker at https://github.com/util-linux/util-linux/issues. AVAILABILITY top The agetty command is part of the util-linux package which can be downloaded from Linux Kernel Archive <https://www.kernel.org/pub/linux/utils/util-linux/>. This page is part of the util-linux (a random collection of Linux utilities) project. Information about the project can be found at https://www.kernel.org/pub/linux/utils/util-linux/. If you have a bug report for this manual page, send it to util-linux@vger.kernel.org. 
This page was obtained from the project's upstream Git repository git://git.kernel.org/pub/scm/utils/util-linux/util-linux.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-14.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org util-linux 2.39.594-1e0ad 2023-07-19 AGETTY(8) Pages that refer to this page: tty(4), ttyS(4), issue(5), systemd.exec(5), ttytype(5), utmp(5), systemd.system-credentials(7), systemd-getty-generator(8) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH. | # agetty\n\n> Alternative `getty`: Open a `tty` port, prompt for a login name, and invoke the `/bin/login` command.\n> It is normally invoked by `init`.\n> Note: the baud rate is the speed of data transfer between a terminal and a device over a serial connection.\n> More information: <https://manned.org/agetty>.\n\n- Connect `stdin` to a port (relative to `/dev`) and optionally specify a baud rate (defaults to 9600):\n\n`agetty {{tty}} {{115200}}`\n\n- Assume `stdin` is already connected to a `tty` and set a [t]imeout for the login:\n\n`agetty {{-t|--timeout}} {{timeout_in_seconds}} -`\n\n- Assume the `tty` is [8]-bit, overriding the `TERM` environment variable set by `init`:\n\n`agetty -8 - {{term_var}}`\n\n- Skip the login ([n]o login) and invoke, as root, another [l]ogin program instead of `/bin/login`:\n\n`agetty {{-n|--skip-login}} {{-l|--login-program}} {{login_program}} {{tty}}`\n\n- Do not display the pre-login ([i]ssue) file (`/etc/issue` by default) before writing the login prompt:\n\n`agetty 
{{-i|--noissue}} -`\n\n- Change the [r]oot directory and write a specific fake [H]ost into the `utmp` file:\n\n`agetty {{-r|--chroot}} {{/path/to/root_directory}} {{-H|--host}} {{fake_host}} -`\n |
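The escape codes above compose naturally in one issue file; a minimal sketch of an /etc/issue fragment (the greeting text and color name are illustrative, not from the manual):

```
This is \n.\O (\s \m \r) on \l
\e{bold}\U currently logged in.\e{reset}
Primary IPv4 address: \4
```

agetty expands these when it writes the login prompt; `agetty --show-issue` previews the rendered result on the current terminal.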
alias | alias(1p) - Linux manual page alias(1p) Linux manual page PROLOG | NAME | SYNOPSIS | DESCRIPTION | OPTIONS | OPERANDS | STDIN | INPUT FILES | ENVIRONMENT VARIABLES | ASYNCHRONOUS EVENTS | STDOUT | STDERR | OUTPUT FILES | EXTENDED DESCRIPTION | EXIT STATUS | CONSEQUENCES OF ERRORS | APPLICATION USAGE | EXAMPLES | RATIONALE | FUTURE DIRECTIONS | SEE ALSO | COPYRIGHT ALIAS(1P) POSIX Programmer's Manual ALIAS(1P) PROLOG top This manual page is part of the POSIX Programmer's Manual. The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux. NAME top alias - define or display aliases SYNOPSIS top alias [alias-name[=string]...] DESCRIPTION top The alias utility shall create or redefine alias definitions or write the values of existing alias definitions to standard output. An alias definition provides a string value that shall replace a command name when it is encountered; see Section 2.3.1, Alias Substitution. An alias definition shall affect the current shell execution environment and the execution environments of the subshells of the current shell. When used as specified by this volume of POSIX.1-2017, the alias definition shall not affect the parent process of the current shell nor any utility environment invoked by the shell; see Section 2.12, Shell Execution Environment. OPTIONS top None. OPERANDS top The following operands shall be supported: alias-name Write the alias definition to standard output. alias-name=string Assign the value of string to the alias alias-name. If no operands are given, all alias definitions shall be written to standard output. STDIN top Not used. INPUT FILES top None. 
ENVIRONMENT VARIABLES top The following environment variables shall affect the execution of alias: LANG Provide a default value for the internationalization variables that are unset or null. (See the Base Definitions volume of POSIX.1-2017, Section 8.2, Internationalization Variables for the precedence of internationalization variables used to determine the values of locale categories.) LC_ALL If set to a non-empty string value, override the values of all the other internationalization variables. LC_CTYPE Determine the locale for the interpretation of sequences of bytes of text data as characters (for example, single-byte as opposed to multi-byte characters in arguments). LC_MESSAGES Determine the locale that should be used to affect the format and contents of diagnostic messages written to standard error. NLSPATH Determine the location of message catalogs for the processing of LC_MESSAGES. ASYNCHRONOUS EVENTS top Default. STDOUT top The format for displaying aliases (when no operands or only name operands are specified) shall be: "%s=%s\n", name, value The value string shall be written with appropriate quoting so that it is suitable for reinput to the shell. See the description of shell quoting in Section 2.2, Quoting. STDERR top The standard error shall be used only for diagnostic messages. OUTPUT FILES top None. EXTENDED DESCRIPTION top None. EXIT STATUS top The following exit values shall be returned: 0 Successful completion. >0 One of the name operands specified did not have an alias definition, or an error occurred. CONSEQUENCES OF ERRORS top Default. The following sections are informative. APPLICATION USAGE top None. EXAMPLES top 1. Create a short alias for a commonly used ls command: alias lf="ls -CF" 2. Create a simple ``redo'' command to repeat previous entries in the command history file: alias r='fc -s' 3. Use 1K units for du: alias du=du\ -k 4. 
Set up nohup so that it can deal with an argument that is itself an alias name: alias nohup="nohup " RATIONALE top The alias description is based on historical KornShell implementations. Known differences exist between that and the C shell. The KornShell version was adopted to be consistent with all the other KornShell features in this volume of POSIX.1-2017, such as command line editing. Since alias affects the current shell execution environment, it is generally provided as a shell regular built-in. Historical versions of the KornShell have allowed aliases to be exported to scripts that are invoked by the same shell. This is triggered by the alias -x flag; it is allowed by this volume of POSIX.1-2017 only when an explicit extension such as -x is used. The standard developers considered that aliases were of use primarily to interactive users and that they should normally not affect shell scripts called by those users; functions are available to such scripts. Historical versions of the KornShell had not written aliases in a quoted manner suitable for reentry to the shell, but this volume of POSIX.1-2017 has made this a requirement for all similar output. Therefore, consistency was chosen over this detail of historical practice. FUTURE DIRECTIONS top None. SEE ALSO top Section 2.9.5, Function Definition Command The Base Definitions volume of POSIX.1-2017, Chapter 8, Environment Variables COPYRIGHT top Portions of this text are reprinted and reproduced in electronic form from IEEE Std 1003.1-2017, Standard for Information Technology -- Portable Operating System Interface (POSIX), The Open Group Base Specifications Issue 7, 2018 Edition, Copyright (C) 2018 by the Institute of Electrical and Electronics Engineers, Inc and The Open Group. In the event of any discrepancy between this version and the original IEEE and The Open Group Standard, the original IEEE and The Open Group Standard is the referee document. 
The original Standard can be obtained online at http://www.opengroup.org/unix/online.html . Any typographical or formatting errors that appear in this page are most likely to have been introduced during the conversion of the source files to man page format. To report such errors, see https://www.kernel.org/doc/man-pages/reporting_bugs.html . IEEE/The Open Group 2017 ALIAS(1P) Pages that refer to this page: unalias(1p) | # alias\n\n> Creates aliases -- words that are replaced by a command string.\n> Aliases expire with the current shell session unless defined in the shell's configuration file, e.g. `~/.bashrc`.\n> More information: <https://tldp.org/LDP/abs/html/aliases.html>.\n\n- List all aliases:\n\n`alias`\n\n- Create a generic alias:\n\n`alias {{word}}="{{command}}"`\n\n- View the command associated with a given alias:\n\n`alias {{word}}`\n\n- Remove an aliased command:\n\n`unalias {{word}}`\n\n- Turn `rm` into an interactive command:\n\n`alias {{rm}}="{{rm --interactive}}"`\n\n- Create `la` as a shortcut for `ls --all`:\n\n`alias {{la}}="{{ls --all}}"`\n |
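The POSIX examples above can be exercised directly in any sh-compatible shell; a small sketch reusing the alias names from the EXAMPLES section:

```shell
#!/bin/sh
# Define the aliases from the EXAMPLES section.
alias lf='ls -CF'     # short alias for a common ls invocation
alias du='du -k'      # always use 1K units for du
alias nohup='nohup '  # trailing space: the word following "nohup"
                      # is itself checked for alias substitution

# With a name operand, alias prints that definition, quoted so it
# is suitable for reinput to the shell ("%s=%s\n" per STDOUT above).
alias lf

# With no operands, all definitions are written to standard output.
alias

# Remove a definition again.
unalias du
```

Note that bash disables alias expansion in non-interactive shells unless `shopt -s expand_aliases` is set; defining and displaying aliases works as shown regardless.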
apropos | apropos(1) - Linux manual page apropos(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | EXIT STATUS | ENVIRONMENT | FILES | SEE ALSO | AUTHOR | BUGS | COLOPHON APROPOS(1) Manual pager utils APROPOS(1) NAME top apropos - search the manual page names and descriptions SYNOPSIS top apropos [-dalv?V] [-e|-w|-r] [-s list] [-m system[,...]] [-M path] [-L locale] [-C file] keyword ... DESCRIPTION top Each manual page has a short description available within it. apropos searches the descriptions for instances of keyword. keyword is usually a regular expression, as if (-r) was used, or may contain wildcards (-w), or match the exact keyword (-e). Using these options, it may be necessary to quote the keyword or escape (\) the special characters to stop the shell from interpreting them. The standard matching rules allow matches to be made against the page name and word boundaries in the description. The database searched by apropos is updated by the mandb program. Depending on your installation, this may be run by a periodic cron job, or may need to be run manually after new manual pages have been installed. OPTIONS top -d, --debug Print debugging information. -v, --verbose Print verbose warning messages. -r, --regex Interpret each keyword as a regular expression. This is the default behaviour. Each keyword will be matched against the page names and the descriptions independently. It can match any part of either. The match is not limited to word boundaries. -w, --wildcard Interpret each keyword as a pattern containing shell style wildcards. Each keyword will be matched against the page names and the descriptions independently. If --exact is also used, a match will only be found if an expanded keyword matches an entire description or page name. Otherwise the keyword is also allowed to match on word boundaries in the description. 
-e, --exact Each keyword will be exactly matched against the page names and the descriptions. -a, --and Only display items that match all the supplied keywords. The default is to display items that match any keyword. -l, --long Do not trim output to the terminal width. Normally, output will be truncated to the terminal width to avoid ugly results from poorly-written NAME sections. -s list, --sections=list, --section=list Search only the given manual sections. list is a colon- or comma-separated list of sections. If an entry in list is a simple section, for example "3", then the displayed list of descriptions will include pages in sections "3", "3perl", "3x", and so on; while if an entry in list has an extension, for example "3perl", then the list will only include pages in that exact part of the manual section. -m system[,...], --systems=system[,...] If this system has access to other operating systems' manual page descriptions, they can be searched using this option. To search NewOS's manual page descriptions, use the option -m NewOS. The system specified can be a combination of comma- delimited operating system names. To include a search of the native operating system's whatis descriptions, include the system name man in the argument string. This option will override the $SYSTEM environment variable. -M path, --manpath=path Specify an alternate set of colon-delimited manual page hierarchies to search. By default, apropos uses the $MANPATH environment variable, unless it is empty or unset, in which case it will determine an appropriate manpath based on your $PATH environment variable. This option overrides the contents of $MANPATH. -L locale, --locale=locale apropos will normally determine your current locale by a call to the C function setlocale(3) which interrogates various environment variables, possibly including $LC_MESSAGES and $LANG. To temporarily override the determined value, use this option to supply a locale string directly to apropos. 
Note that it will not take effect until the search for pages actually begins. Output such as the help message will always be displayed in the initially determined locale. -C file, --config-file=file Use this user configuration file rather than the default of ~/.manpath. -?, --help Print a help message and exit. --usage Print a short usage message and exit. -V, --version Display version information. EXIT STATUS top 0 Successful program execution. 1 Usage, syntax or configuration file error. 2 Operational error. 16 Nothing was found that matched the criteria specified. ENVIRONMENT top SYSTEM If $SYSTEM is set, it will have the same effect as if it had been specified as the argument to the -m option. MANPATH If $MANPATH is set, its value is interpreted as the colon- delimited manual page hierarchy search path to use. See the SEARCH PATH section of manpath(5) for the default behaviour and details of how this environment variable is handled. MANWIDTH If $MANWIDTH is set, its value is used as the terminal width (see the --long option). If it is not set, the terminal width will be calculated using the value of $COLUMNS, and ioctl(2) if available, or falling back to 80 characters if all else fails. POSIXLY_CORRECT If $POSIXLY_CORRECT is set, even to a null value, the default apropos search will be as an extended regex (-r). Nowadays, this is the default behaviour anyway. FILES top /usr/share/man/index.(bt|db|dir|pag) A traditional global index database cache. /var/cache/man/index.(bt|db|dir|pag) An FHS compliant global index database cache. /usr/share/man/.../whatis A traditional whatis text database. SEE ALSO top man(1), whatis(1), mandb(8) AUTHOR top Wilf. (G.Wilford@ee.surrey.ac.uk). Fabrizio Polacco (fpolacco@debian.org). Colin Watson (cjwatson@debian.org). BUGS top https://gitlab.com/man-db/man-db/-/issues https://savannah.nongnu.org/bugs/?group=man-db COLOPHON top This page is part of the man-db (manual pager suite) project. 
Information about the project can be found at http://www.nongnu.org/man-db/. If you have a bug report for this manual page, send it to man-db-devel@nongnu.org. This page was obtained from the project's upstream Git repository https://gitlab.com/cjwatson/man-db on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-18.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org 2.12.0 2023-09-23 APROPOS(1) Pages that refer to this page: lexgrog(1), man(1), manpath(1), whatis(1), man(7) | # apropos\n\n> Search the manual pages for names and descriptions.\n> More information: <https://manned.org/apropos>.\n\n- Search for a keyword using a regular expression:\n\n`apropos {{regular_expression}}`\n\n- Search without restricting the output to the terminal width ([l]ong output):\n\n`apropos -l {{regular_expression}}`\n\n- Search for pages that match [a]ll the expressions given:\n\n`apropos {{regular_expression_1}} -a {{regular_expression_2}} -a {{regular_expression_3}}`\n |
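The options above combine freely; a hedged sketch of common invocations (the keywords are illustrative, and results depend on which manual pages are installed and on mandb(8) having built its index):

```shell
#!/bin/sh
command -v apropos >/dev/null || exit 0  # skip gracefully where man-db is absent

# Regex search (-r is the default): match name or description anywhere.
apropos 'list.*director'

# Exact keyword match (-e), restricted to manual section 3 (-s 3).
apropos -s 3 -e printf

# Require ALL keywords to match (-a) instead of any, without
# truncating the output to the terminal width (-l).
apropos -a -l copy file || true  # exit status 16 just means no match
```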
ar | ar(1) - Linux manual page ar(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | SEE ALSO | COPYRIGHT | COLOPHON AR(1) GNU Development Tools AR(1) NAME top ar - create, modify, and extract from archives SYNOPSIS top ar [-X32_64] [-]p[mod] [--plugin name] [--target bfdname] [--output dirname] [--record-libdeps libdeps] [--thin] [relpos] [count] archive [member...] DESCRIPTION top The GNU ar program creates, modifies, and extracts from archives. An archive is a single file holding a collection of other files in a structure that makes it possible to retrieve the original individual files (called members of the archive). The original files' contents, mode (permissions), timestamp, owner, and group are preserved in the archive, and can be restored on extraction. GNU ar can maintain archives whose members have names of any length; however, depending on how ar is configured on your system, a limit on member-name length may be imposed for compatibility with archive formats maintained with other tools. If it exists, the limit is often 15 characters (typical of formats related to a.out) or 16 characters (typical of formats related to coff). ar is considered a binary utility because archives of this sort are most often used as libraries holding commonly needed subroutines. Since libraries often will depend on other libraries, ar can also record the dependencies of a library when the --record-libdeps option is specified. ar creates an index to the symbols defined in relocatable object modules in the archive when you specify the modifier s. Once created, this index is updated in the archive whenever ar makes a change to its contents (save for the q update operation). An archive with such an index speeds up linking to the library, and allows routines in the library to call each other without regard to their placement in the archive. You may use nm -s or nm --print-armap to list this index table. 
If an archive lacks the table, another form of ar called ranlib can be used to add just the table. GNU ar can optionally create a thin archive, which contains a symbol index and references to the original copies of the member files of the archive. This is useful for building libraries for use within a local build tree, where the relocatable objects are expected to remain available, and copying the contents of each object would only waste time and space. An archive can either be thin or it can be normal. It cannot be both at the same time. Once an archive is created its format cannot be changed without first deleting it and then creating a new archive in its place. Thin archives are also flattened, so that adding one thin archive to another thin archive does not nest it, as would happen with a normal archive. Instead the elements of the first archive are added individually to the second archive. The paths to the elements of the archive are stored relative to the archive itself. GNU ar is designed to be compatible with two different facilities. You can control its activity using command-line options, like the different varieties of ar on Unix systems; or, if you specify the single command-line option -M, you can control it with a script supplied via standard input, like the MRI "librarian" program. OPTIONS top GNU ar allows you to mix the operation code p and modifier flags mod in any order, within the first command-line argument. If you wish, you may begin the first command-line argument with a dash. The p keyletter specifies what operation to execute; it may be any of the following, but you must specify only one of them: d Delete modules from the archive. Specify the names of modules to be deleted as member...; the archive is untouched if you specify no files to delete. If you specify the v modifier, ar lists each module as it is deleted. m Use this operation to move members in an archive. 
The ordering of members in an archive can make a difference in how programs are linked using the library, if a symbol is defined in more than one member. If no modifiers are used with "m", any members you name in the member arguments are moved to the end of the archive; you can use the a, b, or i modifiers to move them to a specified place instead. p Print the specified members of the archive, to the standard output file. If the v modifier is specified, show the member name before copying its contents to standard output. If you specify no member arguments, all the files in the archive are printed. q Quick append; Historically, add the files member... to the end of archive, without checking for replacement. The modifiers a, b, and i do not affect this operation; new members are always placed at the end of the archive. The modifier v makes ar list each file as it is appended. Since the point of this operation is speed, implementations of ar have the option of not updating the archive's symbol table if one exists. Too many different systems however assume that symbol tables are always up-to-date, so GNU ar will rebuild the table even with a quick append. Note - GNU ar treats the command qs as a synonym for r - replacing already existing files in the archive and appending new ones at the end. r Insert the files member... into archive (with replacement). This operation differs from q in that any previously existing members are deleted if their names match those being added. If one of the files named in member... does not exist, ar displays an error message, and leaves undisturbed any existing members of the archive matching that name. By default, new members are added at the end of the file; but you may use one of the modifiers a, b, or i to request placement relative to some existing member. 
The modifier v used with this operation elicits a line of output for each file inserted, along with one of the letters a or r to indicate whether the file was appended (no old member deleted) or replaced. s Add an index to the archive, or update it if it already exists. Note this command is an exception to the rule that there can only be one command letter, as it is possible to use it as either a command or a modifier. In either case it does the same thing. t Display a table listing the contents of archive, or those of the files listed in member... that are present in the archive. Normally only the member name is shown, but if the modifier O is specified, then the corresponding offset of the member is also displayed. Finally, in order to see the modes (permissions), timestamp, owner, group, and size the v modifier should be included. If you do not specify a member, all files in the archive are listed. If there is more than one file with the same name (say, fie) in an archive (say b.a), ar t b.a fie lists only the first instance; to see them all, you must ask for a complete listing---in our example, ar t b.a. x Extract members (named member) from the archive. You can use the v modifier with this operation, to request that ar list each name as it extracts it. If you do not specify a member, all files in the archive are extracted. Files cannot be extracted from a thin archive, and there are restrictions on extracting from archives created with P: The paths must not be absolute, may not contain "..", and any subdirectories in the paths must exist. If it is desired to avoid these restrictions then use the --output option to specify an output directory. A number of modifiers (mod) may immediately follow the p keyletter, to specify variations on an operation's behavior: a Add new files after an existing member of the archive. If you use the modifier a, the name of an existing archive member must be present as the relpos argument, before the archive specification. 
b Add new files before an existing member of the archive. If you use the modifier b, the name of an existing archive member must be present as the relpos argument, before the archive specification. (same as i). c Create the archive. The specified archive is always created if it did not exist, when you request an update. But a warning is issued unless you specify in advance that you expect to create it, by using this modifier. D Operate in deterministic mode. When adding files and the archive index use zero for UIDs, GIDs, timestamps, and use consistent file modes for all files. When this option is used, if ar is used with identical options and identical input files, multiple runs will create identical output files regardless of the input files' owners, groups, file modes, or modification times. If binutils was configured with --enable-deterministic-archives, then this mode is on by default. It can be disabled with the U modifier, below. f Truncate names in the archive. GNU ar will normally permit file names of any length. This will cause it to create archives which are not compatible with the native ar program on some systems. If this is a concern, the f modifier may be used to truncate file names when putting them in the archive. i Insert new files before an existing member of the archive. If you use the modifier i, the name of an existing archive member must be present as the relpos argument, before the archive specification. (same as b). l Specify dependencies of this library. The dependencies must immediately follow this option character, must use the same syntax as the linker command line, and must be specified within a single argument. I.e., if multiple items are needed, they must be quoted to form a single command line argument. For example: l "-L/usr/local/lib -lmydep1 -lmydep2" N Uses the count parameter. This is used if there are multiple entries in the archive with the same name. Extract or delete instance count of the given name from the archive. 
o Preserve the original dates of members when extracting them. If you do not specify this modifier, files extracted from the archive are stamped with the time of extraction. O Display member offsets inside the archive. Use together with the t option. P Use the full path name when matching or storing names in the archive. Archives created with full path names are not POSIX compliant, and thus may not work with tools other than up to date GNU tools. Modifying such archives with GNU ar without using P will remove the full path names unless the archive is a thin archive. Note that P may be useful when adding files to a thin archive since r without P ignores the path when choosing which element to replace. Thus ar rcST archive.a subdir/file1 subdir/file2 file1 will result in the first "subdir/file1" being replaced with "file1" from the current directory. Adding P will prevent this replacement. s Write an object-file index into the archive, or update an existing one, even if no other change is made to the archive. You may use this modifier flag either with any operation, or alone. Running ar s on an archive is equivalent to running ranlib on it. S Do not generate an archive symbol table. This can speed up building a large library in several steps. The resulting archive can not be used with the linker. In order to build a symbol table, you must omit the S modifier on the last execution of ar, or you must run ranlib on the archive. T Deprecated alias for --thin. T is not recommended because in many ar implementations T has a different meaning, as specified by X/Open System Interface. u Normally, ar r... inserts all files listed into the archive. If you would like to insert only those of the files you list that are newer than existing members of the same names, use this modifier. The u modifier is allowed only for the operation r (replace). In particular, the combination qu is not allowed, since checking the timestamps would lose any speed advantage from the operation q. 
U Do not operate in deterministic mode. This is the inverse of the D modifier, above: added files and the archive index will get their actual UID, GID, timestamp, and file mode values. This is the default unless binutils was configured with --enable-deterministic-archives. v This modifier requests the verbose version of an operation. Many operations display additional information, such as filenames processed, when the modifier v is appended. V This modifier shows the version number of ar. The ar program also supports some command-line options which are neither modifiers nor actions, but which do change its behaviour in specific ways: --help Displays the list of command-line options supported by ar and then exits. --version Displays the version information of ar and then exits. -X32_64 ar ignores an initial option spelled -X32_64, for compatibility with AIX. The behaviour produced by this option is the default for GNU ar. ar does not support any of the other -X options; in particular, it does not support -X32 which is the default for AIX ar. --plugin name The optional command-line switch --plugin name causes ar to load the plugin called name which adds support for more file formats, including object files with link-time optimization information. This option is only available if the toolchain has been built with plugin support enabled. If --plugin is not provided, but plugin support has been enabled then ar iterates over the files in ${libdir}/bfd-plugins in alphabetic order and the first plugin that claims the object in question is used. Please note that this plugin search directory is not the one used by ld's -plugin option. In order to make ar use the linker plugin it must be copied into the ${libdir}/bfd-plugins directory. For GCC based compilations the linker plugin is called liblto_plugin.so.0.0.0. For Clang based compilations it is called LLVMgold.so. 
The GCC plugin is always backwards compatible with earlier versions, so it is sufficient to just copy the newest one. --target target The optional command-line switch --target bfdname specifies that the archive members are in an object code format different from your system's default format. See --output dirname The --output option can be used to specify a path to a directory into which archive members should be extracted. If this option is not specified then the current directory will be used. Note - although the presence of this option does imply a x extraction operation that option must still be included on the command line. --record-libdeps libdeps The --record-libdeps option is identical to the l modifier, just handled in long form. --thin Make the specified archive a thin archive. If it already exists and is a regular archive, the existing members must be present in the same directory as archive. @file Read command-line options from file. The options read are inserted in place of the original @file option. If file does not exist, or cannot be read, then the option will be treated literally, and not removed. Options in file are separated by whitespace. A whitespace character may be included in an option by surrounding the entire option in either single or double quotes. Any character (including a backslash) may be included by prefixing the character to be included with a backslash. The file may itself contain additional @file options; any such options will be processed recursively. SEE ALSO top nm(1), ranlib(1), and the Info entries for binutils. COPYRIGHT top Copyright (c) 1991-2023 Free Software Foundation, Inc. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.3 or any later version published by the Free Software Foundation; with no Invariant Sections, with no Front-Cover Texts, and with no Back-Cover Texts. 
A copy of the license is included in the section entitled "GNU Free Documentation License". COLOPHON top This page is part of the binutils (a collection of tools for working with executable binaries) project. Information about the project can be found at http://www.gnu.org/software/binutils/. If you have a bug report for this manual page, see http://sourceware.org/bugzilla/enter_bug.cgi?product=binutils. This page was obtained from the tarball binutils-2.41.tar.gz fetched from https://ftp.gnu.org/gnu/binutils/ on 2023-12-22. If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org binutils-2.41 2023-12-22 AR(1) Pages that refer to this page: ld(1), nm(1), ranlib(1), size(1), strings(1), uselib(2) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH. | # ar\n\n> Create, modify, and extract from Unix archives. Typically used for static libraries (`.a`) and Debian packages (`.deb`).\n> See also: `tar`.\n> More information: <https://manned.org/ar>.\n\n- E[x]tract all members from an archive:\n\n`ar x {{path/to/file.a}}`\n\n- Lis[t] contents in a specific archive:\n\n`ar t {{path/to/file.ar}}`\n\n- [r]eplace or add specific files to an archive:\n\n`ar r {{path/to/file.deb}} {{path/to/debian-binary path/to/control.tar.gz path/to/data.tar.xz ...}}`\n\n- In[s]ert an object file index (equivalent to using `ranlib`):\n\n`ar s {{path/to/file.a}}`\n\n- Create an archive with specific files and an accompanying object file index:\n\n`ar rs {{path/to/file.a}} {{path/to/file1.o path/to/file2.o ...}}`\n |
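Several of the modifiers described above can be seen working together in a short shell sketch. It assumes GNU ar from binutils is on PATH; all file names are illustrative, and plain text files stand in for object files (ar accepts arbitrary members, though only real object files contribute symbols to the index):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
printf 'alpha\n' > part1.o        # placeholder members, not real objects
printf 'beta\n'  > part2.o

# c + S: build the archive in steps without a symbol index (faster),
# then write the index once at the end. "ar s" is equivalent to ranlib.
ar rcS libdemo.a part1.o
ar rcS libdemo.a part2.o
ar s libdemo.a

# D: deterministic mode zeroes UIDs, GIDs and timestamps, so rebuilding
# after a timestamp change still yields a byte-identical archive.
ar rcD det1.a part1.o
touch part1.o                     # bump the modification time
ar rcD det2.a part1.o
cmp det1.a det2.a                 # exits 0: the archives are identical

# @file: options are read from a response file in place of @file.
printf 't libdemo.a\n' > list.rsp
ar @list.rsp                      # same as: ar t libdemo.a
```

Note that on systems where binutils was configured with --enable-deterministic-archives, plain `ar rc` already behaves like `rcD`, as the D entry above explains.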
arch | arch(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training arch(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | AUTHOR | REPORTING BUGS | COPYRIGHT | SEE ALSO | COLOPHON ARCH(1) User Commands ARCH(1) NAME top arch - print machine hardware name (same as uname -m) SYNOPSIS top arch [OPTION]... DESCRIPTION top Print machine architecture. --help display this help and exit --version output version information and exit AUTHOR top Written by David MacKenzie and Karel Zak. REPORTING BUGS top GNU coreutils online help: <https://www.gnu.org/software/coreutils/> Report any translation bugs to <https://translationproject.org/team/> COPYRIGHT top Copyright 2023 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. SEE ALSO top uname(1), uname(2) Full documentation <https://www.gnu.org/software/coreutils/arch> or available locally via: info '(coreutils) arch invocation' COLOPHON top This page is part of the coreutils (basic file, shell and text manipulation utilities) project. Information about the project can be found at http://www.gnu.org/software/coreutils/. If you have a bug report for this manual page, see http://www.gnu.org/software/coreutils/. This page was obtained from the tarball coreutils-9.4.tar.xz fetched from http://ftp.gnu.org/gnu/coreutils/ on 2023-12-22. If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org GNU coreutils 9.4 August 2023 ARCH(1) Pages that refer to this page: uname(1) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. 
| # arch\n\n> Display the name of the system architecture.\n> See also `uname`.\n> More information: <https://www.gnu.org/software/coreutils/arch>.\n\n- Display the system's architecture:\n\n`arch`\n
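As the NAME line above says, arch prints the same value as uname -m. A quick check, assuming the coreutils arch binary is installed:

```shell
# arch and uname -m report the same machine hardware name (e.g. x86_64)
a=$(arch)
u=$(uname -m)
printf 'arch: %s, uname -m: %s\n' "$a" "$u"
[ "$a" = "$u" ]    # always true on the same machine
```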
arp | arp(8) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training arp(8) Linux manual page NAME | SYNOPSIS | DESCRIPTION | MODES | OPTIONS | EXAMPLES | FILES | SEE ALSO | AUTHORS | COLOPHON ARP(8) Linux System Administrator's Manual ARP(8) NAME top arp - manipulate the system ARP cache SYNOPSIS top arp [-vn] [-H type] [-i if] [-ae] [hostname] arp [-v] [-i if] -d hostname [pub] arp [-v] [-H type] [-i if] -s hostname hw_addr [temp] arp [-v] [-H type] [-i if] -s hostname hw_addr [netmask nm] pub arp [-v] [-H type] [-i if] -Ds hostname ifname [netmask nm] pub arp [-vnD] [-H type] [-i if] -f [filename] DESCRIPTION top Arp manipulates or displays the kernel's IPv4 network neighbour cache. It can add entries to the table, delete entries, or display the current content. ARP stands for Address Resolution Protocol, which is used to find the media access control address of a network neighbour for a given IPv4 address. MODES top arp with no mode specifier will print the current content of the table. It is possible to limit the number of entries printed by specifying a hardware address type, interface name or host address. arp -d address will delete an ARP table entry. Root or netadmin privilege is required to do this. The entry is found by IP address. If a hostname is given, it will be resolved before looking up the entry in the ARP table. arp -s address hw_addr is used to set up a new table entry. The format of the hw_addr parameter is dependent on the hardware class, but for most classes one can assume that the usual presentation can be used. For the Ethernet class, this is 6 bytes in hexadecimal, separated by colons. When adding proxy arp entries (that is, those with the publish flag set) a netmask may be specified to proxy arp for entire subnets. This is not good practice, but is supported by older kernels because it can be useful. If the temp flag is not supplied, entries will be stored permanently in the ARP cache. 
To simplify setting up entries for one of your own network interfaces, you can use the arp -Ds address ifname form. In that case the hardware address is taken from the interface with the specified name. OPTIONS top -v, --verbose Tell the user what is going on by being verbose. -n, --numeric shows numerical addresses instead of trying to determine symbolic host, port or user names. -H type, --hw-type type, -t type When setting or reading the ARP cache, this optional parameter tells arp which class of entries it should check for. The default value of this parameter is ether (i.e. hardware code 0x01 for IEEE 802.3 10Mbps Ethernet). Other values might include network technologies such as ARCnet (arcnet) , PROnet (pronet) , AX.25 (ax25) and NET/ROM (netrom). -a Use alternate BSD style output format (with no fixed columns). -e Use default Linux style output format (with fixed columns). -D, --use-device Instead of a hw_addr, the given argument is the name of an interface. arp will use the MAC address of that interface for the table entry. This is usually the best option to set up a proxy ARP entry to yourself. -i If, --device If Select an interface. When dumping the ARP cache only entries matching the specified interface will be printed. When setting a permanent or temp ARP entry this interface will be associated with the entry; if this option is not used, the kernel will guess based on the routing table. For pub entries the specified interface is the interface on which ARP requests will be answered. NOTE: This has to be different from the interface to which the IP datagrams will be routed. NOTE: As of kernel 2.2.0 it is no longer possible to set an ARP entry for an entire subnet. Linux instead does automagic proxy arp when a route exists and it is forwarding. See arp(7) for details. Also the dontpub option which is available for delete and set operations cannot be used with 2.4 and newer kernels. 
-f filename, --file filename Similar to the -s option, only this time the address info is taken from file filename. This can be used if ARP entries for a lot of hosts have to be set up. The name of the data file is very often /etc/ethers, but this is not official. If no filename is specified /etc/ethers is used as default. The format of the file is simple; it only contains ASCII text lines with a hostname, and a hardware address separated by whitespace. Additionally the pub, temp and netmask flags can be used. In all places where a hostname is expected, one can also enter an IP address in dotted-decimal notation. As a special case for compatibility the order of the hostname and the hardware address can be exchanged. Each complete entry in the ARP cache will be marked with the C flag. Permanent entries are marked with M and published entries have the P flag. EXAMPLES top /usr/sbin/arp -i eth0 -Ds 10.0.0.2 eth1 pub This will answer ARP requests for 10.0.0.2 on eth0 with the MAC address for eth1. /usr/sbin/arp -i eth1 -d 10.0.0.1 Delete the ARP table entry for 10.0.0.1 on interface eth1. This will match published proxy ARP entries and permanent entries. FILES top /proc/net/arp /etc/networks /etc/hosts /etc/ethers SEE ALSO top ethers(5), rarp(8), route(8), ifconfig(8), netstat(8) AUTHORS top Fred N. van Kempen <waltje@uwalt.nl.mugnet.org>, Bernd Eckenfels <net-tools@lina.inka.de>. COLOPHON top This page is part of the net-tools (networking utilities) project. Information about the project can be found at http://net-tools.sourceforge.net/. If you have a bug report for this manual page, see http://net-tools.sourceforge.net/. This page was obtained from the project's upstream Git repository git://git.code.sf.net/p/net-tools/code on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-06-29.) 
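As a sketch of the -f file format described above, an /etc/ethers-style data file might look like this (all host names and addresses below are invented for illustration):

```
# hostname/IP and hardware address separated by whitespace; the order
# of the two fields may be exchanged, and the pub, temp and netmask
# flags may follow.
printer.example.com   00:11:22:33:44:55
10.0.0.7              00:aa:bb:cc:dd:ee   temp
router.example.com    00:de:ad:be:ef:00   pub
```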
If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org net-tools 2008-10-03 ARP(8) Pages that refer to this page: ethers(5), proc(5), ifconfig(8), rarp(8), route(8) | # arp\n\n> Show and manipulate your system's ARP cache.\n> More information: <https://manned.org/arp>.\n\n- Show the current ARP table:\n\n`arp -a`\n\n- [d]elete a specific entry:\n\n`arp -d {{address}}`\n\n- [s]et up a new entry in the ARP table:\n\n`arp -s {{address}} {{mac_address}}`\n
arping | arping(8) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training arping(8) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | SEE ALSO | AUTHOR | SECURITY | AVAILABILITY | COLOPHON ARPING(8) iputils ARPING(8) NAME top arping - send ARP REQUEST to a neighbour host SYNOPSIS top arping [-AbDfhqUV] [-c count] [-w deadline] [-i interval] [-s source] [-I interface] {destination} DESCRIPTION top Ping destination on device interface by ARP packets, using source address source. arping supports IPv4 addresses only. For IPv6, see ndisc6(8). OPTIONS top -A The same as -U, but ARP REPLY packets used instead of ARP REQUEST. -b Send only MAC level broadcasts. Normally arping starts from sending broadcast, and switch to unicast after reply received. -c count Stop after sending count ARP REQUEST packets. With deadline option, instead wait for count ARP REPLY packets, or until the timeout expires. -D Duplicate address detection mode (DAD). See RFC2131, 4.4.1. Returns 0, if DAD succeeded i.e. no replies are received. -f Finish after the first reply confirming that target is alive. -I interface Name of network device where to send ARP REQUEST packets. -h Print help page and exit. -q Quiet output. Nothing is displayed. -s source IP source address to use in ARP packets. If this option is absent, source address is: In DAD mode (with option -D) set to 0.0.0.0. In Unsolicited ARP mode (with options -U or -A) set to destination. Otherwise, it is calculated from routing tables. -U Unsolicited ARP mode to update neighbours' ARP caches. No replies are expected. -V Print version of the program and exit. -w deadline Specify a timeout, in seconds, before arping exits regardless of how many packets have been sent or received. If any replies are received, exit with status 0, otherwise status 1. When combined with the count option, exit with status 0 if count replies are received before the deadline expiration, otherwise status 1. 
-i interval Specify an interval, in seconds, between packets. SEE ALSO top ndisc6(8), ping(8), clockdiff(8), tracepath(8). AUTHOR top arping was written by Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>. SECURITY top arping requires CAP_NET_RAW capability to be executed. It is not recommended to be used as set-uid root, because it allows user to modify ARP caches of neighbour hosts. AVAILABILITY top arping is part of iputils package. COLOPHON top This page is part of the iputils (IP utilities) project. Information about the project can be found at http://www.skbuff.net/iputils/. If you have a bug report for this manual page, send it to yoshfuji@skbuff.net, netdev@vger.kernel.org. This page was obtained from the project's upstream Git repository https://github.com/iputils/iputils.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-22.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org iputils 20221126 ARPING(8) Pages that refer to this page: clockdiff(8) 
| # arping\n\n> Discover and probe hosts in a network using the ARP protocol.\n> Useful for MAC address discovery.\n> More information: <https://github.com/ThomasHabets/arping>.\n\n- Ping a host by ARP request packets:\n\n`arping {{host_ip}}`\n\n- Ping a host on a specific interface:\n\n`arping -I {{interface}} {{host_ip}}`\n\n- Ping a host and [f]inish after the first reply:\n\n`arping -f {{host_ip}}`\n\n- Ping a host a specific number ([c]ount) of times:\n\n`arping -c {{count}} {{host_ip}}`\n\n- Broadcast ARP request packets to update neighbours' ARP caches ([U]nsolicited ARP mode):\n\n`arping -U {{ip_to_broadcast}}`\n\n- [D]etect duplicated IP addresses in the network by sending ARP requests with a 3 second timeout:\n\n`arping -D -w {{3}} {{ip_to_check}}`\n |
as | as(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training as(1) Linux manual page NAME | SYNOPSIS | TARGET | DESCRIPTION | OPTIONS | SEE ALSO | COPYRIGHT | COLOPHON AS(1) GNU Development Tools AS(1) NAME top AS - the portable GNU assembler. SYNOPSIS top as [-a[cdghlns][=file]] [--alternate] [--compress-debug-sections] [--nocompress-debug-sections] [-D] [--dump-config] [--debug-prefix-map old=new] [--defsym sym=val] [--elf-stt-common=[no|yes]] [--emulation=name] [-f] [-g] [--gstabs] [--gstabs+] [--gdwarf-<N>] [--gdwarf-sections] [--gdwarf-cie-version=VERSION] [--generate-missing-build-notes=[no|yes]] [--gsframe] [--hash-size=N] [--help] [--target-help] [-I dir] [-J] [-K] [--keep-locals] [-L] [--listing-lhs-width=NUM] [--listing-lhs-width2=NUM] [--listing-rhs-width=NUM] [--listing-cont-lines=NUM] [--multibyte-handling=[allow|warn|warn-sym-only]] [--no-pad-sections] [-o objfile] [-R] [--sectname-subst] [--size-check=[error|warning]] [--statistics] [-v] [-version] [--version] [-W] [--warn] [--fatal-warnings] [-w] [-x] [-Z] [@FILE] [target-options] [--|files ...] 
TARGET top Target AArch64 options: [-EB|-EL] [-mabi=ABI] Target Alpha options: [-mcpu] [-mdebug | -no-mdebug] [-replace | -noreplace] [-relax] [-g] [-Gsize] [-F] [-32addr] Target ARC options: [-mcpu=cpu] [-mA6|-mARC600|-mARC601|-mA7|-mARC700|-mEM|-mHS] [-mcode-density] [-mrelax] [-EB|-EL] Target ARM options: [-mcpu=processor[+extension...]] [-march=architecture[+extension...]] [-mfpu=floating-point-format] [-mfloat-abi=abi] [-meabi=ver] [-mthumb] [-EB|-EL] [-mapcs-32|-mapcs-26|-mapcs-float| -mapcs-reentrant] [-mthumb-interwork] [-k] Target Blackfin options: [-mcpu=processor[-sirevision]] [-mfdpic] [-mno-fdpic] [-mnopic] Target BPF options: [-EL] [-EB] Target CRIS options: [--underscore | --no-underscore] [--pic] [-N] [--emulation=criself | --emulation=crisaout] [--march=v0_v10 | --march=v10 | --march=v32 | --march=common_v10_v32] Target C-SKY options: [-march=arch] [-mcpu=cpu] [-EL] [-mlittle-endian] [-EB] [-mbig-endian] [-fpic] [-pic] [-mljump] [-mno-ljump] [-force2bsr] [-mforce2bsr] [-no-force2bsr] [-mno-force2bsr] [-jsri2bsr] [-mjsri2bsr] [-no-jsri2bsr ] [-mno-jsri2bsr] [-mnolrw ] [-mno-lrw] [-melrw] [-mno-elrw] [-mlaf ] [-mliterals-after-func] [-mno-laf] [-mno-literals-after-func] [-mlabr] [-mliterals-after-br] [-mno-labr] [-mnoliterals-after-br] [-mistack] [-mno-istack] [-mhard-float] [-mmp] [-mcp] [-mcache] [-msecurity] [-mtrust] [-mdsp] [-medsp] [-mvdsp] Target D10V options: [-O] Target D30V options: [-O|-n|-N] Target EPIPHANY options: [-mepiphany|-mepiphany16] Target H8/300 options: [-h-tick-hex] Target i386 options: [--32|--x32|--64] [-n] [-march=CPU[+EXTENSION...]] [-mtune=CPU] Target IA-64 options: [-mconstant-gp|-mauto-pic] [-milp32|-milp64|-mlp64|-mp64] [-mle|mbe] [-mtune=itanium1|-mtune=itanium2] [-munwind-check=warning|-munwind-check=error] [-mhint.b=ok|-mhint.b=warning|-mhint.b=error] [-x|-xexplicit] [-xauto] [-xdebug] Target IP2K options: [-mip2022|-mip2022ext] Target M32C options: [-m32c|-m16c] [-relax] [-h-tick-hex] Target M32R options: 
[--m32rx|--[no-]warn-explicit-parallel-conflicts| --W[n]p] Target M680X0 options: [-l] [-m68000|-m68010|-m68020|...] Target M68HC11 options: [-m68hc11|-m68hc12|-m68hcs12|-mm9s12x|-mm9s12xg] [-mshort|-mlong] [-mshort-double|-mlong-double] [--force-long-branches] [--short-branches] [--strict-direct-mode] [--print-insn-syntax] [--print-opcodes] [--generate-example] Target MCORE options: [-jsri2bsr] [-sifilter] [-relax] [-mcpu=[210|340]] Target Meta options: [-mcpu=cpu] [-mfpu=cpu] [-mdsp=cpu] Target MICROBLAZE options: Target MIPS options: [-nocpp] [-EL] [-EB] [-O[optimization level]] [-g[debug level]] [-G num] [-KPIC] [-call_shared] [-non_shared] [-xgot [-mvxworks-pic] [-mabi=ABI] [-32] [-n32] [-64] [-mfp32] [-mgp32] [-mfp64] [-mgp64] [-mfpxx] [-modd-spreg] [-mno-odd-spreg] [-march=CPU] [-mtune=CPU] [-mips1] [-mips2] [-mips3] [-mips4] [-mips5] [-mips32] [-mips32r2] [-mips32r3] [-mips32r5] [-mips32r6] [-mips64] [-mips64r2] [-mips64r3] [-mips64r5] [-mips64r6] [-construct-floats] [-no-construct-floats] [-mignore-branch-isa] [-mno-ignore-branch-isa] [-mnan=encoding] [-trap] [-no-break] [-break] [-no-trap] [-mips16] [-no-mips16] [-mmips16e2] [-mno-mips16e2] [-mmicromips] [-mno-micromips] [-msmartmips] [-mno-smartmips] [-mips3d] [-no-mips3d] [-mdmx] [-no-mdmx] [-mdsp] [-mno-dsp] [-mdspr2] [-mno-dspr2] [-mdspr3] [-mno-dspr3] [-mmsa] [-mno-msa] [-mxpa] [-mno-xpa] [-mmt] [-mno-mt] [-mmcu] [-mno-mcu] [-mcrc] [-mno-crc] [-mginv] [-mno-ginv] [-mloongson-mmi] [-mno-loongson-mmi] [-mloongson-cam] [-mno-loongson-cam] [-mloongson-ext] [-mno-loongson-ext] [-mloongson-ext2] [-mno-loongson-ext2] [-minsn32] [-mno-insn32] [-mfix7000] [-mno-fix7000] [-mfix-rm7000] [-mno-fix-rm7000] [-mfix-vr4120] [-mno-fix-vr4120] [-mfix-vr4130] [-mno-fix-vr4130] [-mfix-r5900] [-mno-fix-r5900] [-mdebug] [-no-mdebug] [-mpdr] [-mno-pdr] Target MMIX options: [--fixed-special-register-names] [--globalize-symbols] [--gnu-syntax] [--relax] [--no-predefined-symbols] [--no-expand] [--no-merge-gregs] [-x] 
[--linker-allocated-gregs] Target Nios II options: [-relax-all] [-relax-section] [-no-relax] [-EB] [-EL] Target NDS32 options: [-EL] [-EB] [-O] [-Os] [-mcpu=cpu] [-misa=isa] [-mabi=abi] [-mall-ext] [-m[no-]16-bit] [-m[no-]perf-ext] [-m[no-]perf2-ext] [-m[no-]string-ext] [-m[no-]dsp-ext] [-m[no-]mac] [-m[no-]div] [-m[no-]audio-isa-ext] [-m[no-]fpu-sp-ext] [-m[no-]fpu-dp-ext] [-m[no-]fpu-fma] [-mfpu-freg=FREG] [-mreduced-regs] [-mfull-regs] [-m[no-]dx-regs] [-mpic] [-mno-relax] [-mb2bb] Target PDP11 options: [-mpic|-mno-pic] [-mall] [-mno-extensions] [-mextension|-mno-extension] [-mcpu] [-mmachine] Target picoJava options: [-mb|-me] Target PowerPC options: [-a32|-a64] [-mpwrx|-mpwr2|-mpwr|-m601|-mppc|-mppc32|-m603|-m604|-m403|-m405| -m440|-m464|-m476|-m7400|-m7410|-m7450|-m7455|-m750cl|-mgekko| -mbroadway|-mppc64|-m620|-me500|-e500x2|-me500mc|-me500mc64|-me5500| -me6500|-mppc64bridge|-mbooke|-mpower4|-mpwr4|-mpower5|-mpwr5|-mpwr5x| -mpower6|-mpwr6|-mpower7|-mpwr7|-mpower8|-mpwr8|-mpower9|-mpwr9-ma2| -mcell|-mspe|-mspe2|-mtitan|-me300|-mcom] [-many] [-maltivec|-mvsx|-mhtm|-mvle] [-mregnames|-mno-regnames] [-mrelocatable|-mrelocatable-lib|-K PIC] [-memb] [-mlittle|-mlittle-endian|-le|-mbig|-mbig-endian|-be] [-msolaris|-mno-solaris] [-nops=count] Target PRU options: [-link-relax] [-mnolink-relax] [-mno-warn-regname-label] Target RISC-V options: [-fpic|-fPIC|-fno-pic] [-march=ISA] [-mabi=ABI] [-mlittle-endian|-mbig-endian] Target RL78 options: [-mg10] [-m32bit-doubles|-m64bit-doubles] Target RX options: [-mlittle-endian|-mbig-endian] [-m32bit-doubles|-m64bit-doubles] [-muse-conventional-section-names] [-msmall-data-limit] [-mpid] [-mrelax] [-mint-register=number] [-mgcc-abi|-mrx-abi] Target s390 options: [-m31|-m64] [-mesa|-mzarch] [-march=CPU] [-mregnames|-mno-regnames] [-mwarn-areg-zero] Target SCORE options: [-EB][-EL][-FIXDD][-NWARN] [-SCORE5][-SCORE5U][-SCORE7][-SCORE3] [-march=score7][-march=score3] [-USE_R1][-KPIC][-O0][-G num][-V] Target SPARC options: 
[-Av6|-Av7|-Av8|-Aleon|-Asparclet|-Asparclite -Av8plus|-Av8plusa|-Av8plusb|-Av8plusc|-Av8plusd -Av8plusv|-Av8plusm|-Av9|-Av9a|-Av9b|-Av9c -Av9d|-Av9e|-Av9v|-Av9m|-Asparc|-Asparcvis -Asparcvis2|-Asparcfmaf|-Asparcima|-Asparcvis3 -Asparcvisr|-Asparc5] [-xarch=v8plus|-xarch=v8plusa]|-xarch=v8plusb|-xarch=v8plusc -xarch=v8plusd|-xarch=v8plusv|-xarch=v8plusm|-xarch=v9 -xarch=v9a|-xarch=v9b|-xarch=v9c|-xarch=v9d|-xarch=v9e -xarch=v9v|-xarch=v9m|-xarch=sparc|-xarch=sparcvis -xarch=sparcvis2|-xarch=sparcfmaf|-xarch=sparcima -xarch=sparcvis3|-xarch=sparcvisr|-xarch=sparc5 -bump] [-32|-64] [--enforce-aligned-data][--dcti-couples-detect] Target TIC54X options: [-mcpu=54[123589]|-mcpu=54[56]lp] [-mfar-mode|-mf] [-merrors-to-file <filename>|-me <filename>] Target TIC6X options: [-march=arch] [-mbig-endian|-mlittle-endian] [-mdsbt|-mno-dsbt] [-mpid=no|-mpid=near|-mpid=far] [-mpic|-mno-pic] Target TILE-Gx options: [-m32|-m64][-EB][-EL] Target Visium options: [-mtune=arch] Target Xtensa options: [--[no-]text-section-literals] [--[no-]auto-litpools] [--[no-]absolute-literals] [--[no-]target-align] [--[no-]longcalls] [--[no-]transform] [--rename-section oldname=newname] [--[no-]trampolines] [--abi-windowed|--abi-call0] Target Z80 options: [-march=CPU[-EXT][+EXT]] [-local-prefix=PREFIX] [-colonless] [-sdcc] [-fp-s=FORMAT] [-fp-d=FORMAT] DESCRIPTION top GNU as is really a family of assemblers. If you use (or have used) the GNU assembler on one architecture, you should find a fairly similar environment when you use it on another architecture. Each version has much in common with the others, including object file formats, most assembler directives (often called pseudo-ops) and assembler syntax. as is primarily intended to assemble the output of the GNU C compiler "gcc" for use by the linker "ld". Nevertheless, we've tried to make as assemble correctly everything that other assemblers for the same machine would assemble. Any exceptions are documented explicitly. 
This doesn't mean as always uses the same syntax as another assembler for the same architecture; for example, we know of several incompatible versions of 680x0 assembly language syntax. Each time you run as it assembles exactly one source program. The source program is made up of one or more files. (The standard input is also a file.) You give as a command line that has zero or more input file names. The input files are read (from left file name to right). A command-line argument (in any position) that has no special meaning is taken to be an input file name. If you give as no file names it attempts to read one input file from the as standard input, which is normally your terminal. You may have to type ctl-D to tell as there is no more program to assemble. Use -- if you need to explicitly name the standard input file in your command line. If the source is empty, as produces a small, empty object file. as may write warnings and error messages to the standard error file (usually your terminal). This should not happen when a compiler runs as automatically. Warnings report an assumption made so that as could keep assembling a flawed program; errors report a grave problem that stops the assembly. If you are invoking as via the GNU C compiler, you can use the -Wa option to pass arguments through to the assembler. The assembler arguments must be separated from each other (and the -Wa) by commas. For example: gcc -c -g -O -Wa,-alh,-L file.c This passes two options to the assembler: -alh (emit a listing to standard output with high-level and assembly source) and -L (retain local symbols in the symbol table). Usually you do not need to use this -Wa mechanism, since many compiler command-line options are automatically passed to the assembler by the compiler. (You can call the GNU compiler driver with the -v option to see precisely what options it passes to each compilation pass, including the assembler.) OPTIONS top @file Read command-line options from file. 
The options read are inserted in place of the original @file option. If file does not exist, or cannot be read, then the option will be treated literally, and not removed. Options in file are separated by whitespace. A whitespace character may be included in an option by surrounding the entire option in either single or double quotes. Any character (including a backslash) may be included by prefixing the character to be included with a backslash. The file may itself contain additional @file options; any such options will be processed recursively. -a[cdghlmns] Turn on listings, in any of a variety of ways: -ac omit false conditionals -ad omit debugging directives -ag include general information, like as version and options passed -ah include high-level source -al include assembly -am include macro expansions -an omit forms processing -as include symbols =file set the name of the listing file You may combine these options; for example, use -aln for assembly listing without forms processing. The =file option, if used, must be the last one. By itself, -a defaults to -ahls. --alternate Begin in alternate macro mode. --compress-debug-sections Compress DWARF debug sections using zlib with SHF_COMPRESSED from the ELF ABI. The resulting object file may not be compatible with older linkers and object file utilities. Note if compression would make a given section larger then it is not compressed. --compress-debug-sections=none --compress-debug-sections=zlib --compress-debug-sections=zlib-gnu --compress-debug-sections=zlib-gabi --compress-debug-sections=zstd These options control how DWARF debug sections are compressed. --compress-debug-sections=none is equivalent to --nocompress-debug-sections. --compress-debug-sections=zlib and --compress-debug-sections=zlib-gabi are equivalent to --compress-debug-sections. --compress-debug-sections=zlib-gnu compresses DWARF debug sections using the obsoleted zlib-gnu format. The debug sections are renamed to begin with .zdebug. 
--compress-debug-sections=zstd compresses DWARF debug sections using zstd. Note - if compression would actually make a section larger, then it is not compressed nor renamed. --nocompress-debug-sections Do not compress DWARF debug sections. This is usually the default for all targets except the x86/x86_64, but a configure time option can be used to override this. -D Enable debugging in target specific backends, if supported. Otherwise ignored. Even if ignored, this option is accepted for script compatibility with calls to other assemblers. --debug-prefix-map old=new When assembling files in directory old, record debugging information describing them as in new instead. --defsym sym=value Define the symbol sym to be value before assembling the input file. value must be an integer constant. As in C, a leading 0x indicates a hexadecimal value, and a leading 0 indicates an octal value. The value of the symbol can be overridden inside a source file via the use of a ".set" pseudo-op. --dump-config Displays how the assembler is configured and then exits. --elf-stt-common=no --elf-stt-common=yes These options control whether the ELF assembler should generate common symbols with the "STT_COMMON" type. The default can be controlled by a configure option --enable-elf-stt-common. --emulation=name If the assembler is configured to support multiple different target configurations then this option can be used to select the desired form. -f "fast"---skip whitespace and comment preprocessing (assume source is compiler output). -g --gen-debug Generate debugging information for each assembler source line using whichever debug format is preferred by the target. This currently means either STABS, ECOFF or DWARF2. When the debug format is DWARF then a ".debug_info" and ".debug_line" section is only emitted when the assembly file doesn't generate one itself. --gstabs Generate stabs debugging information for each assembler line.
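A minimal sketch of the --defsym option described above. The file name and the symbol name VERSION are hypothetical, and the as invocation is shown commented out since it requires GNU as:

```shell
# Write a small source file that references a symbol never defined in it.
cat > version.s <<'EOF'
        .data
        .long VERSION
EOF

# Supply the value on the command line; as in C, 0x2a is hexadecimal:
#   as --defsym VERSION=0x2a -o version.o version.s
# A ".set VERSION, 43" pseudo-op inside version.s would override it.
grep -c VERSION version.s      # prints 1
```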
This may help debugging assembler code, if the debugger can handle it. --gstabs+ Generate stabs debugging information for each assembler line, with GNU extensions that probably only gdb can handle, and that could make other debuggers crash or refuse to read your program. This may help debugging assembler code. Currently the only GNU extension is the location of the current working directory at assembling time. --gdwarf-2 Generate DWARF2 debugging information for each assembler line. This may help debugging assembler code, if the debugger can handle it. Note---this option is only supported by some targets, not all of them. --gdwarf-3 This option is the same as the --gdwarf-2 option, except that it allows for the possibility of the generation of extra debug information as per version 3 of the DWARF specification. Note - enabling this option does not guarantee the generation of any extra information, the choice to do so is on a per target basis. --gdwarf-4 This option is the same as the --gdwarf-2 option, except that it allows for the possibility of the generation of extra debug information as per version 4 of the DWARF specification. Note - enabling this option does not guarantee the generation of any extra information, the choice to do so is on a per target basis. --gdwarf-5 This option is the same as the --gdwarf-2 option, except that it allows for the possibility of the generation of extra debug information as per version 5 of the DWARF specification. Note - enabling this option does not guarantee the generation of any extra information, the choice to do so is on a per target basis. --gdwarf-sections Instead of creating a .debug_line section, create a series of .debug_line.foo sections where foo is the name of the corresponding code section. For example a code section called .text.func will have its dwarf line number information placed into a section called .debug_line.text.func. 
If the code section is just called .text then the debug line section will still be called just .debug_line without any suffix. --gdwarf-cie-version=version Control which version of DWARF Common Information Entries (CIEs) are produced. When this flag is not specified the default is version 1, though some targets can modify this default. Other possible values for version are 3 or 4. --generate-missing-build-notes=yes --generate-missing-build-notes=no These options control whether the ELF assembler should generate GNU Build attribute notes if none are present in the input sources. The default can be controlled by the --enable-generate-build-notes configure option. --gsframe Create .sframe section from CFI directives. --hash-size N Ignored. Supported for command line compatibility with other assemblers. --help Print a summary of the command-line options and exit. --target-help Print a summary of all target specific options and exit. -I dir Add directory dir to the search list for ".include" directives. -J Don't warn about signed overflow. -K Issue warnings when difference tables are altered for long displacements. -L --keep-locals Keep (in the symbol table) local symbols. These symbols start with system-specific local label prefixes, typically .L for ELF systems or L for traditional a.out systems. --listing-lhs-width=number Set the maximum width, in words, of the output data column for an assembler listing to number. --listing-lhs-width2=number Set the maximum width, in words, of the output data column for continuation lines in an assembler listing to number. --listing-rhs-width=number Set the maximum width of an input source line, as displayed in a listing, to number bytes. --listing-cont-lines=number Set the maximum number of lines printed in a listing for a single line of input to number + 1.
--multibyte-handling=allow --multibyte-handling=warn --multibyte-handling=warn-sym-only --multibyte-handling=warn_sym_only Controls how the assembler handles multibyte characters in the input. The default (which can be restored by using the allow argument) is to allow such characters without complaint. Using the warn argument will make the assembler generate a warning message whenever any multibyte character is encountered. Using the warn-sym-only argument will only cause a warning to be generated when a symbol is defined with a name that contains multibyte characters. (References to undefined symbols will not generate a warning). --no-pad-sections Stop the assembler from padding the ends of output sections to the alignment of that section. The default is to pad the sections, but this can waste space which might be needed on targets which have tight memory constraints. -o objfile Name the object-file output from as objfile. -R Fold the data section into the text section. --reduce-memory-overheads Ignored. Supported for compatibility with tools that pass the same option to both the assembler and the linker. --sectname-subst Honor substitution sequences in section names. --size-check=error --size-check=warning Issue an error or warning for an invalid ELF .size directive. --statistics Print the maximum space (in bytes) and total time (in seconds) used by assembly. --strip-local-absolute Remove local absolute symbols from the outgoing symbol table. -v -version Print the as version. --version Print the as version and exit. -W --no-warn Suppress warning messages. --fatal-warnings Treat warnings as errors. --warn Don't suppress warning messages or treat them as errors. -w Ignored. -x Ignored. -Z Generate an object file even after errors. -- | files ... Standard input, or source files to assemble. The following options are available when as is configured for the 64-bit mode of the ARM Architecture (AArch64).
-EB This option specifies that the output generated by the assembler should be marked as being encoded for a big-endian processor. -EL This option specifies that the output generated by the assembler should be marked as being encoded for a little-endian processor. -mabi=abi Specify which ABI the source code uses. The recognized arguments are: "ilp32" and "lp64", which select the ELF32 and ELF64 object file formats respectively. The default is "lp64". -mcpu=processor[+extension...] This option specifies the target processor. The assembler will issue an error message if an attempt is made to assemble an instruction which will not execute on the target processor. The following processor names are recognized: "cortex-a34", "cortex-a35", "cortex-a53", "cortex-a55", "cortex-a57", "cortex-a65", "cortex-a65ae", "cortex-a72", "cortex-a73", "cortex-a75", "cortex-a76", "cortex-a76ae", "cortex-a77", "cortex-a78", "cortex-a78ae", "cortex-a78c", "cortex-a510", "cortex-a710", "ares", "exynos-m1", "falkor", "neoverse-n1", "neoverse-n2", "neoverse-e1", "neoverse-v1", "qdf24xx", "saphira", "thunderx", "vulcan", "xgene1", "xgene2", "cortex-r82", "cortex-x1", and "cortex-x2". The special name "all" may be used to allow the assembler to accept instructions valid for any supported processor, including all optional extensions. In addition to the basic instruction set, the assembler can be told to accept, or restrict, various extension mnemonics that extend the processor. If some implementations of a particular processor can have an extension, then those extensions are automatically enabled. Consequently, you will not normally have to specify any additional extensions. -march=architecture[+extension...] This option specifies the target architecture. The assembler will issue an error message if an attempt is made to assemble an instruction which will not execute on the target architecture.
The following architecture names are recognized: "armv8-a", "armv8.1-a", "armv8.2-a", "armv8.3-a", "armv8.4-a", "armv8.5-a", "armv8.6-a", "armv8.7-a", "armv8.8-a", "armv8-r", "armv9-a", "armv9.1-a", "armv9.2-a", and "armv9.3-a". If both -mcpu and -march are specified, the assembler will use the setting for -mcpu. If neither is specified, the assembler will default to -mcpu=all. The architecture option can be extended with the same instruction set extension options as the -mcpu option. Unlike -mcpu, extensions are not always enabled by default. -mverbose-error This option enables verbose error messages for AArch64 gas. This option is enabled by default. -mno-verbose-error This option disables verbose error messages in AArch64 gas. The following options are available when as is configured for an Alpha processor. -mcpu This option specifies the target processor. If an attempt is made to assemble an instruction which will not execute on the target processor, the assembler may either expand the instruction as a macro or issue an error message. This option is equivalent to the ".arch" directive. The following processor names are recognized: 21064, "21064a", 21066, 21068, 21164, "21164a", "21164pc", 21264, "21264a", "21264b", "ev4", "ev5", "lca45", "ev5", "ev56", "pca56", "ev6", "ev67", "ev68". The special name "all" may be used to allow the assembler to accept instructions valid for any Alpha processor. In order to support existing practice in OSF/1 with respect to ".arch", and existing practice within MILO (the Linux ARC bootloader), the numbered processor names (e.g. 21064) enable the processor-specific PALcode instructions, while the "electro-vlasic" names (e.g. "ev4") do not. -mdebug -no-mdebug Enables or disables the generation of ".mdebug" encapsulation for stabs directives and procedure descriptors. The default is to automatically enable ".mdebug" when the first stabs directive is seen.
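A minimal sketch of selecting an AArch64 target as described above. File names and the function are hypothetical, and the as invocations are shown commented out since they require GNU as configured for AArch64:

```shell
# A trivial AArch64 function; add is valid on any armv8-a processor.
cat > add.s <<'EOF'
        .text
        .global add_one
add_one:
        add     w0, w0, #1
        ret
EOF

# Restrict accepted instructions to the base architecture:
#   as -march=armv8-a -o add.o add.s
# or accept anything any supported processor implements:
#   as -mcpu=all -o add.o add.s
grep -c ret add.s              # prints 1
```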
-relax This option forces all relocations to be put into the object file, instead of saving space and resolving some relocations at assembly time. Note that this option does not propagate all symbol arithmetic into the object file, because not all symbol arithmetic can be represented. However, the option can still be useful in specific applications. -replace -noreplace Enables or disables the optimization of procedure calls, both at assemblage and at link time. These options are only available for VMS targets and "-replace" is the default. See section 1.4.1 of the OpenVMS Linker Utility Manual. -g This option is used when the compiler generates debug information. When gcc is using mips-tfile to generate debug information for ECOFF, local labels must be passed through to the object file. Otherwise this option has no effect. -Gsize A local common symbol larger than size is placed in ".bss", while smaller symbols are placed in ".sbss". -F -32addr These options are ignored for backward compatibility. The following options are available when as is configured for an ARC processor. -mcpu=cpu This option selects the core processor variant. -EB | -EL Select either big-endian (-EB) or little-endian (-EL) output. -mcode-density Enable Code Density extension instructions. The following options are available when as is configured for the ARM processor family. -mcpu=processor[+extension...] Specify which ARM processor variant is the target. -march=architecture[+extension...] Specify which ARM architecture variant is used by the target. -mfpu=floating-point-format Select which Floating Point architecture is the target. -mfloat-abi=abi Select which floating point ABI is in use. -mthumb Enable Thumb only instruction decoding. -mapcs-32 | -mapcs-26 | -mapcs-float | -mapcs-reentrant Select which procedure calling convention is in use. -EB | -EL Select either big-endian (-EB) or little-endian (-EL) output. 
-mthumb-interwork Specify that the code has been generated with interworking between Thumb and ARM code in mind. -mccs Turns on CodeComposer Studio assembly syntax compatibility mode. -k Specify that PIC code has been generated. The following options are available when as is configured for the Blackfin processor family. -mcpu=processor[-sirevision] This option specifies the target processor. The optional sirevision is not used by the assembler. It's here such that GCC can easily pass down its "-mcpu=" option. The assembler will issue an error message if an attempt is made to assemble an instruction which will not execute on the target processor. The following processor names are recognized: "bf504", "bf506", "bf512", "bf514", "bf516", "bf518", "bf522", "bf523", "bf524", "bf525", "bf526", "bf527", "bf531", "bf532", "bf533", "bf534", "bf535" (not implemented yet), "bf536", "bf537", "bf538", "bf539", "bf542", "bf542m", "bf544", "bf544m", "bf547", "bf547m", "bf548", "bf548m", "bf549", "bf549m", "bf561", and "bf592". -mfdpic Assemble for the FDPIC ABI. -mno-fdpic -mnopic Disable -mfdpic. The following options are available when as is configured for the Linux kernel BPF processor family. -EB This option specifies that the assembler should emit big-endian eBPF. -EL This option specifies that the assembler should emit little-endian eBPF. Note that if no endianness option is specified in the command line, the host endianness is used. See the info pages for documentation of the CRIS-specific options. The following options are available when as is configured for the C-SKY processor family. -march=archname Assemble for architecture archname. The --help option lists valid values for archname. -mcpu=cpuname Assemble for architecture cpuname. The --help option lists valid values for cpuname. -EL -mlittle-endian Generate little-endian output. -EB -mbig-endian Generate big-endian output. -fpic -pic Generate position-independent code.
-mljump -mno-ljump Enable/disable transformation of the short branch instructions "jbf", "jbt", and "jbr" to "jmpi". This option is for V2 processors only. It is ignored on CK801 and CK802 targets, which do not support the "jmpi" instruction, and is enabled by default for other processors. -mbranch-stub -mno-branch-stub Pass through "R_CKCORE_PCREL_IMM26BY2" relocations for "bsr" instructions to the linker. This option is only available for bare-metal C-SKY V2 ELF targets, where it is enabled by default. It cannot be used in code that will be dynamically linked against shared libraries. -force2bsr -mforce2bsr -no-force2bsr -mno-force2bsr Enable/disable transformation of "jbsr" instructions to "bsr". This option is always enabled (and -mno-force2bsr is ignored) for CK801/CK802 targets. It is also always enabled when -mbranch-stub is in effect. -jsri2bsr -mjsri2bsr -no-jsri2bsr -mno-jsri2bsr Enable/disable transformation of "jsri" instructions to "bsr". This option is enabled by default. -mnolrw -mno-lrw Enable/disable transformation of "lrw" instructions into a "movih"/"ori" pair. -melrw -mno-elrw Enable/disable extended "lrw" instructions. This option is enabled by default for CK800-series processors. -mlaf -mliterals-after-func -mno-laf -mno-literals-after-func Enable/disable placement of literal pools after each function. -mlabr -mliterals-after-br -mno-labr -mnoliterals-after-br Enable/disable placement of literal pools after unconditional branches. This option is enabled by default. -mistack -mno-istack Enable/disable interrupt stack instructions. This option is enabled by default on CK801, CK802, and CK802 processors. The following options explicitly enable certain optional instructions. These features are also enabled implicitly by using "-mcpu=" to specify a processor that supports it. -mhard-float Enable hard float instructions. -mmp Enable multiprocessor instructions. -mcp Enable coprocessor instructions. -mcache Enable cache prefetch instruction. 
-msecurity Enable C-SKY security instructions. -mtrust Enable C-SKY trust instructions. -mdsp Enable DSP instructions. -medsp Enable enhanced DSP instructions. -mvdsp Enable vector DSP instructions. The following options are available when as is configured for an Epiphany processor. -mepiphany Specifies that both 32-bit and 16-bit instructions are allowed. This is the default behavior. -mepiphany16 Restricts the permitted instructions to just the 16-bit set. The following options are available when as is configured for an H8/300 processor. The Renesas H8/300 version of "as" has one machine-dependent option: -h-tick-hex Support H'00 style hex constants in addition to 0x00 style. -mach=name Sets the H8300 machine variant. The following machine names are recognised: "h8300h", "h8300hn", "h8300s", "h8300sn", "h8300sx" and "h8300sxn". The following options are available when as is configured for an i386 processor. --32 | --x32 | --64 Select the word size, either 32 bits or 64 bits. --32 implies Intel i386 architecture, while --x32 and --64 imply AMD x86-64 architecture with 32-bit or 64-bit word-size respectively. These options are only available with the ELF object file format, and require that the necessary BFD support has been included (on a 32-bit platform you have to add --enable-64-bit-bfd to configure to enable 64-bit usage and use x86-64 as target platform). -n By default, x86 GAS replaces multiple nop instructions used for alignment within code sections with multi-byte nop instructions such as leal 0(%esi,1),%esi. This switch disables the optimization if a single byte nop (0x90) is explicitly specified as the fill byte for alignment. --divide On SVR4-derived platforms, the character / is treated as a comment character, which means that it cannot be used in expressions. The --divide option turns / into a normal character.
This does not disable / at the beginning of a line starting a comment, or affect using # for starting a comment. -march=CPU[+EXTENSION...] This option specifies the target processor. The assembler will issue an error message if an attempt is made to assemble an instruction which will not execute on the target processor. The following processor names are recognized: "i8086", "i186", "i286", "i386", "i486", "i586", "i686", "pentium", "pentiumpro", "pentiumii", "pentiumiii", "pentium4", "prescott", "nocona", "core", "core2", "corei7", "iamcu", "k6", "k6_2", "athlon", "opteron", "k8", "amdfam10", "bdver1", "bdver2", "bdver3", "bdver4", "znver1", "znver2", "znver3", "znver4", "btver1", "btver2", "generic32" and "generic64". In addition to the basic instruction set, the assembler can be told to accept various extension mnemonics. For example, "-march=i686+sse4+vmx" extends i686 with sse4 and vmx. The following extensions are currently supported: 8087, 287, 387, 687, "cmov", "fxsr", "mmx", "sse", "sse2", "sse3", "sse4a", "ssse3", "sse4.1", "sse4.2", "sse4", "avx", "avx2", "lahf_sahf", "monitor", "adx", "rdseed", "prfchw", "smap", "mpx", "sha", "rdpid", "ptwrite", "cet", "gfni", "vaes", "vpclmulqdq", "prefetchwt1", "clflushopt", "se1", "clwb", "movdiri", "movdir64b", "enqcmd", "serialize", "tsxldtrk", "kl", "widekl", "hreset", "avx512f", "avx512cd", "avx512er", "avx512pf", "avx512vl", "avx512bw", "avx512dq", "avx512ifma", "avx512vbmi", "avx512_4fmaps", "avx512_4vnniw", "avx512_vpopcntdq", "avx512_vbmi2", "avx512_vnni", "avx512_bitalg", "avx512_vp2intersect", "tdx", "avx512_bf16", "avx_vnni", "avx512_fp16", "prefetchi", "avx_ifma", "avx_vnni_int8", "cmpccxadd", "wrmsrns", "msrlist", "avx_ne_convert", "rao_int", "fred", "lkgs", "amx_int8", "amx_bf16", "amx_fp16", "amx_complex", "amx_tile", "vmx", "vmfunc", "smx", "xsave", "xsaveopt", "xsavec", "xsaves", "aes", "pclmul", "fsgsbase", "rdrnd", "f16c", "bmi2", "fma", "movbe", "ept", "lzcnt", "popcnt", "hle", "rtm", "tsx", 
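As a hedged illustration of composing -march with extension mnemonics as described above. The loop only builds the option string; the as invocation, shown commented out, assumes GNU as configured for i386:

```shell
# Compose "-march=i686+sse4+vmx" from a base CPU plus extension mnemonics.
base=i686
march="$base"
for ext in sse4 vmx; do
    march="$march+$ext"
done
echo "$march"                  # prints i686+sse4+vmx

# An extension can be prefixed with "no" to revoke it, e.g. i686+sse4+nommx:
#   as -march="$march" -o out.o in.s
```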
"invpcid", "clflush", "mwaitx", "clzero", "wbnoinvd", "pconfig", "waitpkg", "uintr", "cldemote", "rdpru", "mcommit", "sev_es", "lwp", "fma4", "xop", "cx16", "syscall", "rdtscp", "3dnow", "3dnowa", "sse4a", "sse5", "snp", "invlpgb", "tlbsync", "svme" and "padlock". Note that these extension mnemonics can be prefixed with "no" to revoke the respective (and any dependent) functionality. When the ".arch" directive is used with -march, the ".arch" directive takes precedence. -mtune=CPU This option specifies a processor to optimize for. When used in conjunction with the -march option, only instructions of the processor specified by the -march option will be generated. Valid CPU values are identical to the processor list of -march=CPU. -msse2avx This option specifies that the assembler should encode SSE instructions with VEX prefix. -muse-unaligned-vector-move This option specifies that the assembler should encode aligned vector move as unaligned vector move. -msse-check=none -msse-check=warning -msse-check=error These options control if the assembler should check SSE instructions. -msse-check=none will make the assembler not check SSE instructions, which is the default. -msse-check=warning will make the assembler issue a warning for any SSE instruction. -msse-check=error will make the assembler issue an error for any SSE instruction. -mavxscalar=128 -mavxscalar=256 These options control how the assembler should encode scalar AVX instructions. -mavxscalar=128 will encode scalar AVX instructions with 128bit vector length, which is the default. -mavxscalar=256 will encode scalar AVX instructions with 256bit vector length. WARNING: Don't use this for production code - due to CPU errata the resulting code may not work on certain models. -mvexwig=0 -mvexwig=1 These options control how the assembler should encode VEX.W-ignored (WIG) VEX instructions. -mvexwig=0 will encode WIG VEX instructions with vex.w = 0, which is the default.
-mvexwig=1 will encode WIG VEX instructions with vex.w = 1. WARNING: Don't use this for production code - due to CPU errata the resulting code may not work on certain models. -mevexlig=128 -mevexlig=256 -mevexlig=512 These options control how the assembler should encode length-ignored (LIG) EVEX instructions. -mevexlig=128 will encode LIG EVEX instructions with 128bit vector length, which is the default. -mevexlig=256 and -mevexlig=512 will encode LIG EVEX instructions with 256bit and 512bit vector length, respectively. -mevexwig=0 -mevexwig=1 These options control how the assembler should encode w-ignored (WIG) EVEX instructions. -mevexwig=0 will encode WIG EVEX instructions with evex.w = 0, which is the default. -mevexwig=1 will encode WIG EVEX instructions with evex.w = 1. -mmnemonic=att -mmnemonic=intel This option specifies instruction mnemonic for matching instructions. The ".att_mnemonic" and ".intel_mnemonic" directives take precedence. -msyntax=att -msyntax=intel This option specifies instruction syntax when processing instructions. The ".att_syntax" and ".intel_syntax" directives take precedence. -mnaked-reg This option specifies that registers don't require a % prefix. The ".att_syntax" and ".intel_syntax" directives take precedence. -madd-bnd-prefix This option forces the assembler to add BND prefix to all branches, even if such prefix was not explicitly specified in the source code. -mno-shared On ELF target, the assembler normally optimizes out non-PLT relocations against defined non-weak global branch targets with default visibility. The -mshared option tells the assembler to generate code which may go into a shared library where all non-weak global branch targets with default visibility can be preempted. The resulting code is slightly bigger. This option only affects the handling of branch instructions. -mbig-obj On PE/COFF target this option forces the use of big object file format, which allows more than 32768 sections.
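A small sketch of the syntax-selection options above. The file name is hypothetical, and the commented-out as invocation assumes GNU as configured for i386:

```shell
# Intel-syntax source; the directive inside the file takes precedence
# over the -msyntax command-line option.
cat > intel.s <<'EOF'
        .intel_syntax noprefix
        mov eax, 1
        ret
EOF

# With -mnaked-reg, registers need no % prefix even in AT&T-style files:
#   as -msyntax=intel -mnaked-reg -o intel.o intel.s
grep -c intel_syntax intel.s   # prints 1
```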
-momit-lock-prefix=no -momit-lock-prefix=yes These options control how the assembler should encode the lock prefix. This option is intended as a workaround for processors that fail on the lock prefix. It can only be safely used with single-core, single-thread computers. -momit-lock-prefix=yes will omit all lock prefixes. -momit-lock-prefix=no will encode the lock prefix as usual, which is the default. -mfence-as-lock-add=no -mfence-as-lock-add=yes These options control how the assembler should encode lfence, mfence and sfence. -mfence-as-lock-add=yes will encode lfence, mfence and sfence as lock addl $0x0, (%rsp) in 64-bit mode and lock addl $0x0, (%esp) in 32-bit mode. -mfence-as-lock-add=no will encode lfence, mfence and sfence as usual, which is the default. -mrelax-relocations=no -mrelax-relocations=yes These options control whether the assembler should generate relax relocations, R_386_GOT32X, in 32-bit mode, or R_X86_64_GOTPCRELX and R_X86_64_REX_GOTPCRELX, in 64-bit mode. -mrelax-relocations=yes will generate relax relocations. -mrelax-relocations=no will not generate relax relocations. The default can be controlled by a configure option --enable-x86-relax-relocations. -malign-branch-boundary=NUM This option controls how the assembler should align branches with segment prefixes or NOP. NUM must be a power of 2. It should be 0 or no less than 16. Branches will be aligned within a NUM byte boundary. -malign-branch-boundary=0, which is the default, doesn't align branches. -malign-branch=TYPE[+TYPE...] This option specifies types of branches to align. TYPE is a combination of jcc, which aligns conditional jumps, fused, which aligns fused conditional jumps, jmp, which aligns unconditional jumps, call, which aligns calls, and ret, which aligns rets, and indirect, which aligns indirect jumps and calls. The default is -malign-branch=jcc+fused+jmp. -malign-branch-prefix-size=NUM This option specifies the maximum number of prefixes on an instruction to align branches.
NUM should be between 0 and 5. The default NUM is 5. -mbranches-within-32B-boundaries This option aligns conditional jumps, fused conditional jumps and unconditional jumps within a 32 byte boundary with up to 5 segment prefixes on an instruction. It is equivalent to -malign-branch-boundary=32 -malign-branch=jcc+fused+jmp -malign-branch-prefix-size=5. The default doesn't align branches. -mlfence-after-load=no -mlfence-after-load=yes These options control whether the assembler should generate lfence after load instructions. -mlfence-after-load=yes will generate lfence. -mlfence-after-load=no will not generate lfence, which is the default. -mlfence-before-indirect-branch=none -mlfence-before-indirect-branch=all -mlfence-before-indirect-branch=register -mlfence-before-indirect-branch=memory These options control whether the assembler should generate lfence before indirect near branch instructions. -mlfence-before-indirect-branch=all will generate lfence before indirect near branch via register and issue a warning before indirect near branch via memory. It also implicitly sets -mlfence-before-ret=shl when there's no explicit -mlfence-before-ret=. -mlfence-before-indirect-branch=register will generate lfence before indirect near branch via register. -mlfence-before-indirect-branch=memory will issue a warning before indirect near branch via memory. -mlfence-before-indirect-branch=none will not generate lfence nor issue a warning, which is the default. Note that lfence won't be generated before indirect near branch via register with -mlfence-after-load=yes since lfence will be generated after loading the branch target register. -mlfence-before-ret=none -mlfence-before-ret=shl -mlfence-before-ret=or -mlfence-before-ret=yes -mlfence-before-ret=not These options control whether the assembler should generate lfence before ret. -mlfence-before-ret=or will generate an or instruction with lfence. -mlfence-before-ret=shl/yes will generate a shl instruction with lfence.
-mlfence-before-ret=not will generate a not instruction with lfence. -mlfence-before-ret=none will not generate lfence, which is the default. -mx86-used-note=no -mx86-used-note=yes These options control whether the assembler should generate GNU_PROPERTY_X86_ISA_1_USED and GNU_PROPERTY_X86_FEATURE_2_USED GNU property notes. The default can be controlled by the --enable-x86-used-note configure option. -mevexrcig=rne -mevexrcig=rd -mevexrcig=ru -mevexrcig=rz These options control how the assembler should encode SAE-only EVEX instructions. -mevexrcig=rne will encode RC bits of EVEX instruction with 00, which is the default. -mevexrcig=rd, -mevexrcig=ru and -mevexrcig=rz will encode SAE-only EVEX instructions with 01, 10 and 11 RC bits, respectively. -mamd64 -mintel64 This option specifies that the assembler should accept only AMD64 or Intel64 ISA in 64-bit mode. The default is to accept common, Intel64 only and AMD64 ISAs. -O0 | -O | -O1 | -O2 | -Os Optimize instruction encoding with smaller instruction size. -O and -O1 encode 64-bit register load instructions with 64-bit immediate as 32-bit register load instructions with 31-bit or 32-bit immediates, encode 64-bit register clearing instructions with 32-bit register clearing instructions, encode 256-bit/512-bit VEX/EVEX vector register clearing instructions with 128-bit VEX vector register clearing instructions, encode 128-bit/256-bit EVEX vector register load/store instructions with VEX vector register load/store instructions, and encode 128-bit/256-bit EVEX packed integer logical instructions with 128-bit/256-bit VEX packed integer logical. -O2 includes -O1 optimization plus encodes 256-bit/512-bit EVEX vector register clearing instructions with 128-bit EVEX vector register clearing instructions. In 64-bit mode VEX encoded instructions with commutative source operands will also have their source operands swapped if this allows using the 2-byte VEX prefix form instead of the 3-byte one.
Certain forms of AND as well as OR with the same (register) operand specified twice will also be changed to TEST. -Os includes -O2 optimization plus encodes 16-bit, 32-bit and 64-bit register tests with immediate as 8-bit register test with immediate. -O0 turns off this optimization. The following options are available when as is configured for the Ubicom IP2K series. -mip2022ext Specifies that the extended IP2022 instructions are allowed. -mip2022 Restores the default behaviour, which restricts the permitted instructions to just the basic IP2022 ones. The following options are available when as is configured for the Renesas M32C and M16C processors. -m32c Assemble M32C instructions. -m16c Assemble M16C instructions (the default). -relax Enable support for link-time relaxations. -h-tick-hex Support H'00 style hex constants in addition to 0x00 style. The following options are available when as is configured for the Renesas M32R (formerly Mitsubishi M32R) series. --m32rx Specify which processor in the M32R family is the target. The default is normally the M32R, but this option changes it to the M32RX. --warn-explicit-parallel-conflicts or --Wp Produce warning messages when questionable parallel constructs are encountered. --no-warn-explicit-parallel-conflicts or --Wnp Do not produce warning messages when questionable parallel constructs are encountered. The following options are available when as is configured for the Motorola 68000 series. -l Shorten references to undefined symbols, to one word instead of two. -m68000 | -m68008 | -m68010 | -m68020 | -m68030 | -m68040 | -m68060 | -m68302 | -m68331 | -m68332 | -m68333 | -m68340 | -mcpu32 | -m5200 Specify what processor in the 68000 family is the target. The default is normally the 68020, but this can be changed at configuration time. -m68881 | -m68882 | -mno-68881 | -mno-68882 The target machine does (or does not) have a floating-point coprocessor. The default is to assume a coprocessor for 68020, 68030, and cpu32. 
Although the basic 68000 is not compatible with the 68881, a combination of the two can be specified, since it's possible to do emulation of the coprocessor instructions with the main processor. -m68851 | -mno-68851 The target machine does (or does not) have a memory- management unit coprocessor. The default is to assume an MMU for 68020 and up. The following options are available when as is configured for an Altera Nios II processor. -relax-section Replace identified out-of-range branches with PC-relative "jmp" sequences when possible. The generated code sequences are suitable for use in position-independent code, but there is a practical limit on the extended branch range because of the length of the sequences. This option is the default. -relax-all Replace branch instructions not determinable to be in range and all call instructions with "jmp" and "callr" sequences (respectively). This option generates absolute relocations against the target symbols and is not appropriate for position-independent code. -no-relax Do not replace any branches or calls. -EB Generate big-endian output. -EL Generate little-endian output. This is the default. -march=architecture This option specifies the target architecture. The assembler issues an error message if an attempt is made to assemble an instruction which will not execute on the target architecture. The following architecture names are recognized: "r1", "r2". The default is "r1". The following options are available when as is configured for a PRU processor. -mlink-relax Assume that LD would optimize LDI32 instructions by checking the upper 16 bits of the expression. If they are all zeros, then LD would shorten the LDI32 instruction to a single LDI. In such case "as" will output DIFF relocations for diff expressions. -mno-link-relax Assume that LD would not optimize LDI32 instructions. As a consequence, DIFF relocations will not be emitted. -mno-warn-regname-label Do not warn if a label name matches a register name. 
Usually assembler programmers will want this warning to be emitted. C compilers may want to turn this off. The following options are available when as is configured for a MIPS processor. -G num This option sets the largest size of an object that can be referenced implicitly with the "gp" register. It is only accepted for targets that use ECOFF format, such as a DECstation running Ultrix. The default value is 8. -EB Generate "big endian" format output. -EL Generate "little endian" format output. -mips1 -mips2 -mips3 -mips4 -mips5 -mips32 -mips32r2 -mips32r3 -mips32r5 -mips32r6 -mips64 -mips64r2 -mips64r3 -mips64r5 -mips64r6 Generate code for a particular MIPS Instruction Set Architecture level. -mips1 is an alias for -march=r3000, -mips2 is an alias for -march=r6000, -mips3 is an alias for -march=r4000 and -mips4 is an alias for -march=r8000. -mips5, -mips32, -mips32r2, -mips32r3, -mips32r5, -mips32r6, -mips64, -mips64r2, -mips64r3, -mips64r5, and -mips64r6 correspond to generic MIPS V, MIPS32, MIPS32 Release 2, MIPS32 Release 3, MIPS32 Release 5, MIPS32 Release 6, MIPS64, MIPS64 Release 2, MIPS64 Release 3, MIPS64 Release 5, and MIPS64 Release 6 ISA processors, respectively. -march=cpu Generate code for a particular MIPS CPU. -mtune=cpu Schedule and tune for a particular MIPS CPU. -mfix7000 -mno-fix7000 Cause nops to be inserted if the read of the destination register of an mfhi or mflo instruction occurs in the following two instructions. -mfix-rm7000 -mno-fix-rm7000 Cause nops to be inserted if a dmult or dmultu instruction is followed by a load instruction. -mfix-r5900 -mno-fix-r5900 Do not attempt to schedule the preceding instruction into the delay slot of a branch instruction placed at the end of a short loop of six instructions or fewer and always schedule a "nop" instruction there instead. The short loop bug under certain conditions causes loops to execute only once or twice, due to a hardware bug in the R5900 chip. 
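As an illustration of combining the MIPS ISA-selection and endianness options above, the sketch below assembles a trivial routine for MIPS32 Release 2 with a cross assembler. The mips-linux-gnu- target triplet, and the presence of a cross binutils at all, are assumptions about the local toolchain:

```shell
# Hypothetical cross-assembly; the mips-linux-gnu- triplet is an assumption.
cat > loop.s <<'EOF'
        .text
        .globl  f
f:      addiu   $2, $0, 1      # return 1
        jr      $31
        nop                    # branch delay slot
EOF

if command -v mips-linux-gnu-as >/dev/null 2>&1; then
    # MIPS32 Release 2, big-endian, with DSP Release 1 instructions accepted:
    mips-linux-gnu-as -mips32r2 -EB -mdsp loop.s -o loop.o
else
    echo "mips-linux-gnu-as not installed; skipping"
fi
```

The same source can be rebuilt little-endian by substituting -EL, since endianness is purely an output-format choice here.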
-mdebug -no-mdebug Cause stabs-style debugging output to go into an ECOFF-style .mdebug section instead of the standard ELF .stabs sections. -mpdr -mno-pdr Control generation of ".pdr" sections. -mgp32 -mfp32 The register sizes are normally inferred from the ISA and ABI, but these flags force a certain group of registers to be treated as 32 bits wide at all times. -mgp32 controls the size of general-purpose registers and -mfp32 controls the size of floating-point registers. -mgp64 -mfp64 The register sizes are normally inferred from the ISA and ABI, but these flags force a certain group of registers to be treated as 64 bits wide at all times. -mgp64 controls the size of general-purpose registers and -mfp64 controls the size of floating-point registers. -mfpxx The register sizes are normally inferred from the ISA and ABI, but using this flag in combination with -mabi=32 enables an ABI variant which will operate correctly with floating- point registers which are 32 or 64 bits wide. -modd-spreg -mno-odd-spreg Enable use of floating-point operations on odd-numbered single-precision registers when supported by the ISA. -mfpxx implies -mno-odd-spreg, otherwise the default is -modd-spreg. -mips16 -no-mips16 Generate code for the MIPS 16 processor. This is equivalent to putting ".module mips16" at the start of the assembly file. -no-mips16 turns off this option. -mmips16e2 -mno-mips16e2 Enable the use of MIPS16e2 instructions in MIPS16 mode. This is equivalent to putting ".module mips16e2" at the start of the assembly file. -mno-mips16e2 turns off this option. -mmicromips -mno-micromips Generate code for the microMIPS processor. This is equivalent to putting ".module micromips" at the start of the assembly file. -mno-micromips turns off this option. This is equivalent to putting ".module nomicromips" at the start of the assembly file. -msmartmips -mno-smartmips Enables the SmartMIPS extension to the MIPS32 instruction set. 
This is equivalent to putting ".module smartmips" at the start of the assembly file. -mno-smartmips turns off this option. -mips3d -no-mips3d Generate code for the MIPS-3D Application Specific Extension. This tells the assembler to accept MIPS-3D instructions. -no-mips3d turns off this option. -mdmx -no-mdmx Generate code for the MDMX Application Specific Extension. This tells the assembler to accept MDMX instructions. -no-mdmx turns off this option. -mdsp -mno-dsp Generate code for the DSP Release 1 Application Specific Extension. This tells the assembler to accept DSP Release 1 instructions. -mno-dsp turns off this option. -mdspr2 -mno-dspr2 Generate code for the DSP Release 2 Application Specific Extension. This option implies -mdsp. This tells the assembler to accept DSP Release 2 instructions. -mno-dspr2 turns off this option. -mdspr3 -mno-dspr3 Generate code for the DSP Release 3 Application Specific Extension. This option implies -mdsp and -mdspr2. This tells the assembler to accept DSP Release 3 instructions. -mno-dspr3 turns off this option. -mmsa -mno-msa Generate code for the MIPS SIMD Architecture Extension. This tells the assembler to accept MSA instructions. -mno-msa turns off this option. -mxpa -mno-xpa Generate code for the MIPS eXtended Physical Address (XPA) Extension. This tells the assembler to accept XPA instructions. -mno-xpa turns off this option. -mmt -mno-mt Generate code for the MT Application Specific Extension. This tells the assembler to accept MT instructions. -mno-mt turns off this option. -mmcu -mno-mcu Generate code for the MCU Application Specific Extension. This tells the assembler to accept MCU instructions. -mno-mcu turns off this option. -mcrc -mno-crc Generate code for the MIPS cyclic redundancy check (CRC) Application Specific Extension. This tells the assembler to accept CRC instructions. -mno-crc turns off this option. -mginv -mno-ginv Generate code for the Global INValidate (GINV) Application Specific Extension. 
This tells the assembler to accept GINV instructions. -mno-ginv turns off this option. -mloongson-mmi -mno-loongson-mmi Generate code for the Loongson MultiMedia extensions Instructions (MMI) Application Specific Extension. This tells the assembler to accept MMI instructions. -mno-loongson-mmi turns off this option. -mloongson-cam -mno-loongson-cam Generate code for the Loongson Content Address Memory (CAM) instructions. This tells the assembler to accept Loongson CAM instructions. -mno-loongson-cam turns off this option. -mloongson-ext -mno-loongson-ext Generate code for the Loongson EXTensions (EXT) instructions. This tells the assembler to accept Loongson EXT instructions. -mno-loongson-ext turns off this option. -mloongson-ext2 -mno-loongson-ext2 Generate code for the Loongson EXTensions R2 (EXT2) instructions. This option implies -mloongson-ext. This tells the assembler to accept Loongson EXT2 instructions. -mno-loongson-ext2 turns off this option. -minsn32 -mno-insn32 Only use 32-bit instruction encodings when generating code for the microMIPS processor. This option inhibits the use of any 16-bit instructions. This is equivalent to putting ".set insn32" at the start of the assembly file. -mno-insn32 turns off this option. This is equivalent to putting ".set noinsn32" at the start of the assembly file. By default -mno-insn32 is selected, allowing all instructions to be used. --construct-floats --no-construct-floats The --no-construct-floats option disables the construction of double width floating point constants by loading the two halves of the value into the two single width floating point registers that make up the double width register. By default --construct-floats is selected, allowing construction of these floating point constants. --relax-branch --no-relax-branch The --relax-branch option enables the relaxation of out-of- range branches. By default --no-relax-branch is selected, causing any out-of-range branches to produce an error. 
-mignore-branch-isa -mno-ignore-branch-isa Ignore branch checks for invalid transitions between ISA modes. The semantics of branches do not provide for an ISA mode switch, so in most cases the ISA mode a branch has been encoded for has to be the same as the ISA mode of the branch's target label. GAS therefore implements checks that verify, when assembling a branch, that the two ISA modes match. -mignore-branch-isa disables these checks. By default -mno-ignore-branch-isa is selected, causing any invalid branch requiring a transition between ISA modes to produce an error. -mnan=encoding Select between the IEEE 754-2008 (-mnan=2008) and the legacy (-mnan=legacy) NaN encoding format. The latter is the default. --emulation=name This option was formerly used to switch between ELF and ECOFF output on targets like IRIX 5 that supported both. MIPS ECOFF support was removed in GAS 2.24, so the option now serves little purpose. It is retained for backwards compatibility. The available configuration names are: mipself, mipslelf and mipsbelf. Choosing mipself now has no effect, since the output is always ELF. mipslelf and mipsbelf select little- and big-endian output respectively, but -EL and -EB are now the preferred options instead. -nocpp as ignores this option. It is accepted for compatibility with the native tools. --trap --no-trap --break --no-break Control how to deal with multiplication overflow and division by zero. --trap or --no-break (which are synonyms) take a trap exception (and only work for Instruction Set Architecture level 2 and higher); --break or --no-trap (also synonyms, and the default) take a break exception. -n When this option is used, as will issue a warning every time it generates a nop instruction from a macro. The following options are available when as is configured for a LoongArch processor. 
-fpic -fPIC Generate position-independent code. -fno-pic Don't generate position-independent code (default). The following options are available when as is configured for a Meta processor. "-mcpu=metac11" Generate code for Meta 1.1. "-mcpu=metac12" Generate code for Meta 1.2. "-mcpu=metac21" Generate code for Meta 2.1. "-mfpu=metac21" Allow code to use the FPU hardware of Meta 2.1. See the info pages for documentation of the MMIX-specific options. The following options are available when as is configured for a NDS32 processor. "-O1" Optimize for performance. "-Os" Optimize for space. "-EL" Produce little endian data output. "-EB" Produce big endian data output. "-mpic" Generate PIC. "-mno-fp-as-gp-relax" Suppress fp-as-gp relaxation for this file. "-mb2bb-relax" Back-to-back branch optimization. "-mno-all-relax" Suppress all relaxation for this file. "-march=<arch name>" Assemble for architecture <arch name>, which can be v3, v3j, v3m, v3f, v3s, v2, v2j, v2f, v2s. "-mbaseline=<baseline>" Assemble for baseline <baseline>, which can be v2, v3, v3m. "-mfpu-freg=FREG" Specify an FPU configuration. "0 8 SP / 4 DP registers" "1 16 SP / 8 DP registers" "2 32 SP / 16 DP registers" "3 32 SP / 32 DP registers" "-mabi=abi" Specify an ABI version; <abi> can be v1, v2, v2fp, v2fpp. "-m[no-]mac" Enable/Disable Multiply instructions support. "-m[no-]div" Enable/Disable Divide instructions support. 
"-m[no-]16bit-ext" Enable/Disable 16-bit extension "-m[no-]dx-regs" Enable/Disable d0/d1 registers "-m[no-]perf-ext" Enable/Disable Performance extension "-m[no-]perf2-ext" Enable/Disable Performance extension 2 "-m[no-]string-ext" Enable/Disable String extension "-m[no-]reduced-regs" Enable/Disable Reduced Register configuration (GPR16) option "-m[no-]audio-isa-ext" Enable/Disable AUDIO ISA extension "-m[no-]fpu-sp-ext" Enable/Disable FPU SP extension "-m[no-]fpu-dp-ext" Enable/Disable FPU DP extension "-m[no-]fpu-fma" Enable/Disable FPU fused-multiply-add instructions "-mall-ext" Turn on all extensions and instructions support The following options are available when as is configured for a PowerPC processor. -a32 Generate ELF32 or XCOFF32. -a64 Generate ELF64 or XCOFF64. -K PIC Set EF_PPC_RELOCATABLE_LIB in ELF flags. -mpwrx | -mpwr2 Generate code for POWER/2 (RIOS2). -mpwr Generate code for POWER (RIOS1) -m601 Generate code for PowerPC 601. -mppc, -mppc32, -m603, -m604 Generate code for PowerPC 603/604. -m403, -m405 Generate code for PowerPC 403/405. -m440 Generate code for PowerPC 440. BookE and some 405 instructions. -m464 Generate code for PowerPC 464. -m476 Generate code for PowerPC 476. -m7400, -m7410, -m7450, -m7455 Generate code for PowerPC 7400/7410/7450/7455. -m750cl, -mgekko, -mbroadway Generate code for PowerPC 750CL/Gekko/Broadway. -m821, -m850, -m860 Generate code for PowerPC 821/850/860. -mppc64, -m620 Generate code for PowerPC 620/625/630. -me200z2, -me200z4 Generate code for e200 variants, e200z2 with LSP, e200z4 with SPE. -me300 Generate code for PowerPC e300 family. -me500, -me500x2 Generate code for Motorola e500 core complex. -me500mc Generate code for Freescale e500mc core complex. -me500mc64 Generate code for Freescale e500mc64 core complex. -me5500 Generate code for Freescale e5500 core complex. -me6500 Generate code for Freescale e6500 core complex. -mlsp Enable LSP instructions. (Disables SPE and SPE2.) 
-mspe Generate code for Motorola SPE instructions. (Disables LSP.) -mspe2 Generate code for Freescale SPE2 instructions. (Disables LSP.) -mtitan Generate code for AppliedMicro Titan core complex. -mppc64bridge Generate code for PowerPC 64, including bridge insns. -mbooke Generate code for 32-bit BookE. -ma2 Generate code for A2 architecture. -maltivec Generate code for processors with AltiVec instructions. -mvle Generate code for Freescale PowerPC VLE instructions. -mvsx Generate code for processors with Vector-Scalar (VSX) instructions. -mhtm Generate code for processors with Hardware Transactional Memory instructions. -mpower4, -mpwr4 Generate code for Power4 architecture. -mpower5, -mpwr5, -mpwr5x Generate code for Power5 architecture. -mpower6, -mpwr6 Generate code for Power6 architecture. -mpower7, -mpwr7 Generate code for Power7 architecture. -mpower8, -mpwr8 Generate code for Power8 architecture. -mpower9, -mpwr9 Generate code for Power9 architecture. -mpower10, -mpwr10 Generate code for Power10 architecture. -mfuture Generate code for 'future' architecture. -mcell Generate code for Cell Broadband Engine architecture. -mcom Generate code for Power/PowerPC common instructions. -many Generate code for any architecture (PWR/PWRX/PPC). -mregnames Allow symbolic names for registers. -mno-regnames Do not allow symbolic names for registers. -mrelocatable Support for GCC's -mrelocatable option. -mrelocatable-lib Support for GCC's -mrelocatable-lib option. -memb Set PPC_EMB bit in ELF flags. -mlittle, -mlittle-endian, -le Generate code for a little endian machine. -mbig, -mbig-endian, -be Generate code for a big endian machine. -msolaris Generate code for Solaris. -mno-solaris Do not generate code for Solaris. -nops=count If an alignment directive inserts more than count nops, put a branch at the beginning to skip execution of the nops. The following options are available when as is configured for a RISC-V processor. 
-fpic -fPIC Generate position-independent code. -fno-pic Don't generate position-independent code (default). -march=ISA Select the base ISA, as specified by ISA. For example -march=rv32ima. If this option and the architecture attributes aren't set, then the assembler will check the default configure setting --with-arch=ISA. -misa-spec=ISAspec Select the default ISA spec version. If the version of the ISA isn't set by -march, then the assembler sets the version according to the default chosen spec. If this option isn't set, then the assembler will check the default configure setting --with-isa-spec=ISAspec. -mpriv-spec=PRIVspec Select the privileged spec version, which determines whether a given CSR is valid or not. If this option and the privilege attributes aren't set, then the assembler will check the default configure setting --with-priv-spec=PRIVspec. -mabi=ABI Selects the ABI, which is either "ilp32" or "lp64", optionally followed by "f", "d", or "q" to indicate single-precision, double-precision, or quad-precision floating-point calling convention, or none to indicate the soft-float calling convention. Also, "ilp32" can optionally be followed by "e" to indicate the RVE ABI, which is always soft-float. -mrelax Take advantage of linker relaxations to reduce the number of instructions required to materialize symbol addresses. (default) -mno-relax Don't do linker relaxations. -march-attr Generate the default contents for the RISC-V ELF attribute section if the .attribute directives are not set. This section is used to record the information that a linker or runtime loader needs to check compatibility. This information includes the ISA string, stack alignment requirement, unaligned memory accesses, and the major, minor and revision version of the privileged specification. -mno-arch-attr Don't generate the default RISC-V ELF attribute section if the .attribute directives are not set. 
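The interaction of -march and -mabi can be sketched with a one-instruction routine. The riscv64-linux-gnu- triplet and the presence of a RISC-V cross binutils are assumptions:

```shell
cat > answer.s <<'EOF'
        .text
        .globl  f
f:      li      a0, 42         # return 42
        ret
EOF

if command -v riscv64-linux-gnu-as >/dev/null 2>&1; then
    # Base ISA rv64imac (integer, mul/div, atomics, compressed)
    # with the lp64 soft-float calling convention:
    riscv64-linux-gnu-as -march=rv64imac -mabi=lp64 answer.s -o answer.o
else
    echo "riscv64-linux-gnu-as not installed; skipping"
fi
```

Appending "d" to both strings (rv64imacd / lp64d) would instead select the double-precision hard-float calling convention.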
-mcsr-check Enable CSR checking for ISA-dependent CSRs and read-only CSRs. ISA-dependent CSRs are only valid when the specific ISA is set. Read-only CSRs cannot be written by the CSR instructions. -mno-csr-check Don't do CSR checking. -mlittle-endian Generate code for a little endian machine. -mbig-endian Generate code for a big endian machine. See the info pages for documentation of the RX-specific options. The following options are available when as is configured for the s390 processor family. -m31 -m64 Select the word size, either 31/32 bits or 64 bits. -mesa -mzarch Select the architecture mode, either the Enterprise System Architecture (esa) or the z/Architecture mode (zarch). -march=processor Specify which s390 processor variant is the target, g5 (or arch3), g6, z900 (or arch5), z990 (or arch6), z9-109, z9-ec (or arch7), z10 (or arch8), z196 (or arch9), zEC12 (or arch10), z13 (or arch11), z14 (or arch12), z15 (or arch13), or z16 (or arch14). -mregnames -mno-regnames Allow or disallow symbolic names for registers. -mwarn-areg-zero Warn whenever the operand for a base or index register has been specified but evaluates to zero. The following options are available when as is configured for a TMS320C6000 processor. -march=arch Enable (only) instructions from architecture arch. By default, all instructions are permitted. The following values of arch are accepted: "c62x", "c64x", "c64x+", "c67x", "c67x+", "c674x". -mdsbt -mno-dsbt The -mdsbt option causes the assembler to generate the "Tag_ABI_DSBT" attribute with a value of 1, indicating that the code is using DSBT addressing. The -mno-dsbt option, the default, causes the tag to have a value of 0, indicating that the code does not use DSBT addressing. The linker will emit a warning if objects of different type (DSBT and non-DSBT) are linked together. 
-mpid=no -mpid=near -mpid=far The -mpid= option causes the assembler to generate the "Tag_ABI_PID" attribute with a value indicating the form of data addressing used by the code. -mpid=no, the default, indicates position-dependent data addressing, -mpid=near indicates position-independent addressing with GOT accesses using near DP addressing, and -mpid=far indicates position-independent addressing with GOT accesses using far DP addressing. The linker will emit a warning if objects built with different settings of this option are linked together. -mpic -mno-pic The -mpic option causes the assembler to generate the "Tag_ABI_PIC" attribute with a value of 1, indicating that the code is using position-independent code addressing. The -mno-pic option, the default, causes the tag to have a value of 0, indicating position-dependent code addressing. The linker will emit a warning if objects of different type (position-dependent and position-independent) are linked together. -mbig-endian -mlittle-endian Generate code for the specified endianness. The default is little-endian. The following options are available when as is configured for a TILE-Gx processor. -m32 | -m64 Select the word size, either 32 bits or 64 bits. -EB | -EL Select the endianness, either big-endian (-EB) or little-endian (-EL). The following option is available when as is configured for a Visium processor. -mtune=arch This option specifies the target architecture. If an attempt is made to assemble an instruction that will not execute on the target architecture, the assembler will issue an error message. The following names are recognized: "mcm24" "mcm" "gr5" "gr6" The following options are available when as is configured for an Xtensa processor. --text-section-literals | --no-text-section-literals Control the treatment of literal pools. The default is --no-text-section-literals, which places literals in separate sections in the output file. This allows the literal pool to be placed in a data RAM/ROM. 
With --text-section-literals, the literals are interspersed in the text section in order to keep them as close as possible to their references. This may be necessary for large assembly files, where the literals would otherwise be out of range of the "L32R" instructions in the text section. Literals are grouped into pools following ".literal_position" directives or preceding "ENTRY" instructions. These options only affect literals referenced via PC-relative "L32R" instructions; literals for absolute mode "L32R" instructions are handled separately. --auto-litpools | --no-auto-litpools Control the treatment of literal pools. The default is --no-auto-litpools, which in the absence of --text-section-literals places literals in separate sections in the output file. This allows the literal pool to be placed in a data RAM/ROM. With --auto-litpools, the literals are interspersed in the text section in order to keep them as close as possible to their references; explicit ".literal_position" directives are not required. This may be necessary for very large functions, where a single literal pool at the beginning of the function may not be reachable by "L32R" instructions at the end. These options only affect literals referenced via PC-relative "L32R" instructions; literals for absolute mode "L32R" instructions are handled separately. When used together with --text-section-literals, --auto-litpools takes precedence. --absolute-literals | --no-absolute-literals Indicate to the assembler whether "L32R" instructions use absolute or PC-relative addressing. If the processor includes the absolute addressing option, the default is to use absolute "L32R" relocations. Otherwise, only the PC-relative "L32R" relocations can be used. --target-align | --no-target-align Enable or disable automatic alignment to reduce branch penalties at some expense in code size. This optimization is enabled by default. 
Note that the assembler will always align instructions like "LOOP" that have fixed alignment requirements. --longcalls | --no-longcalls Enable or disable transformation of call instructions to allow calls across a greater range of addresses. This option should be used when call targets can potentially be out of range. It may degrade both code size and performance, but the linker can generally optimize away the unnecessary overhead when a call ends up within range. The default is --no-longcalls. --transform | --no-transform Enable or disable all assembler transformations of Xtensa instructions, including both relaxation and optimization. The default is --transform; --no-transform should only be used in the rare cases when the instructions must be exactly as specified in the assembly source. Using --no-transform causes out of range instruction operands to be errors. --rename-section oldname=newname Rename the oldname section to newname. This option can be used multiple times to rename multiple sections. --trampolines | --no-trampolines Enable or disable transformation of jump instructions to allow jumps across a greater range of addresses. This option should be used when jump targets can potentially be out of range. In the absence of such jumps this option does not affect code size or performance. The default is --trampolines. --abi-windowed | --abi-call0 Choose the ABI tag written to the ".xtensa.info" section. The ABI tag indicates the ABI of the assembly code. A warning is issued by the linker on an attempt to link object files with inconsistent ABI tags. The default ABI is chosen by the Xtensa core configuration. The following options are available when as is configured for a Z80 processor. -march=CPU[-EXT...][+EXT...] This option specifies the target processor. The assembler will issue an error message if an attempt is made to assemble an instruction which will not execute on the target processor. 
The following processor names are recognized: "z80", "z180", "ez80", "gbz80", "z80n", "r800". In addition to the basic instruction set, the assembler can be told to accept some extension mnemonics. For example, "-march=z180+sli+infc" extends z180 with SLI instructions and IN F,(C). The following extensions are currently supported: "full" (all known instructions), "adl" (ADL CPU mode by default, eZ80 only), "sli" (instruction known as SLI, SLL or SL1), "xyhl" (instructions with halves of index registers: IXL, IXH, IYL, IYH), "xdcb" (instructions like RotOp (II+d),R and BitOp n,(II+d),R), "infc" (instruction IN F,(C) or IN (C)), "outc0" (instruction OUT (C),0). Note that rather than extending a basic instruction set, the extension mnemonics starting with "-" revoke the respective functionality: "-march=z80-full+xyhl" first removes all default extensions and adds support for index register halves only. If this option is not specified then "-march=z80+xyhl+infc" is assumed. -local-prefix=prefix Mark all labels with the specified prefix as local. Such a label can still be marked global explicitly in the code. This option does not change the default local label prefix ".L"; it just adds a new one. -colonless Accept colonless labels. All symbols at the beginning of a line are treated as labels. -sdcc Accept assembler code produced by SDCC. -fp-s=FORMAT Format of single precision floating point numbers. Default: ieee754 (32 bit). -fp-d=FORMAT Format of double precision floating point numbers. Default: ieee754 (64 bit). SEE ALSO top gcc(1), ld(1), and the Info entries for binutils and ld. COPYRIGHT top Copyright (c) 1991-2023 Free Software Foundation, Inc. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.3 or any later version published by the Free Software Foundation; with no Invariant Sections, with no Front-Cover Texts, and with no Back-Cover Texts. 
A copy of the license is included in the section entitled "GNU Free Documentation License". COLOPHON top This page is part of the binutils (a collection of tools for working with executable binaries) project. Information about the project can be found at http://www.gnu.org/software/binutils/. If you have a bug report for this manual page, see http://sourceware.org/bugzilla/enter_bug.cgi?product=binutils. This page was obtained from the tarball binutils-2.41.tar.gz fetched from https://ftp.gnu.org/gnu/binutils/ on 2023-12-22. If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org binutils-2.41 2023-12-22 AS(1) Pages that refer to this page: elf(5) | # as\n\n> Portable GNU assembler.\n> Primarily intended to assemble output from `gcc` to be used by `ld`.\n> More information: <https://manned.org/as>.\n\n- Assemble a file, writing the output to `a.out`:\n\n`as {{file.s}}`\n\n- Assemble the output to a given file:\n\n`as {{file.s}} -o {{out.o}}`\n\n- Generate output faster by skipping whitespace and comment preprocessing. (Should only be used for trusted compilers):\n\n`as -f {{file.s}}`\n\n- Include a given path to the list of directories to search for files specified in `.include` directives:\n\n`as -I {{path/to/directory}} {{file.s}}`\n |
at | at(1p) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training at(1p) Linux manual page PROLOG | NAME | SYNOPSIS | DESCRIPTION | OPTIONS | OPERANDS | STDIN | INPUT FILES | ENVIRONMENT VARIABLES | ASYNCHRONOUS EVENTS | STDOUT | STDERR | OUTPUT FILES | EXTENDED DESCRIPTION | EXIT STATUS | CONSEQUENCES OF ERRORS | APPLICATION USAGE | EXAMPLES | RATIONALE | FUTURE DIRECTIONS | SEE ALSO | COPYRIGHT AT(1P) POSIX Programmer's Manual AT(1P) PROLOG top This manual page is part of the POSIX Programmer's Manual. The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux. NAME top at — execute commands at a later time SYNOPSIS top at [-m] [-f file] [-q queuename] -t time_arg at [-m] [-f file] [-q queuename] timespec... at -r at_job_id... at -l -q queuename at -l [at_job_id...] DESCRIPTION top The at utility shall read commands from standard input and group them together as an at-job, to be executed at a later time. The at-job shall be executed in a separate invocation of the shell, running in a separate process group with no controlling terminal, except that the environment variables, current working directory, file creation mask, and other implementation-defined execution-time attributes in effect when the at utility is executed shall be retained and used when the at-job is executed. When the at-job is submitted, the at_job_id and scheduled time shall be written to standard error. The at_job_id is an identifier that shall be a string consisting solely of alphanumeric characters and the <period> character. The at_job_id shall be assigned by the system when the job is scheduled such that it uniquely identifies a particular job. User notification and the processing of the job's standard output and standard error are described under the -m option. 
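The submission flow described above can be sketched as a short shell session. The timestamp helper relies on GNU date(1), the backup command is purely illustrative, and a working atd is assumed:

```shell
# Build a touch(1)-style CCYYMMDDhhmm timestamp for "tomorrow 09:30".
# date -d is a GNU extension; the scheduled command below is illustrative.
when=$(date -d tomorrow +%Y%m%d)0930

if command -v at >/dev/null 2>&1; then
    # On success, at writes the at_job_id and scheduled time to standard error.
    echo 'tar czf /tmp/docs.tgz "$HOME/docs"' | at -m -t "$when" ||
        echo "submission failed (is atd running?)"
    at -l || true    # report the invoking user's pending jobs
else
    echo "at not installed; skipping"
fi
```

The -m flag requests mail on completion; the -t format itself is specified under OPTIONS.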
Users shall be permitted to use at if their name appears in the file at.allow which is located in an implementation-defined directory. If that file does not exist, the file at.deny, which is located in an implementation-defined directory, shall be checked to determine whether the user shall be denied access to at. If neither file exists, only a process with appropriate privileges shall be allowed to submit a job. If only at.deny exists and is empty, global usage shall be permitted. The at.allow and at.deny files shall consist of one user name per line. OPTIONS top The at utility shall conform to the Base Definitions volume of POSIX.1-2017, Section 12.2, Utility Syntax Guidelines. The following options shall be supported: -f file Specify the pathname of a file to be used as the source of the at-job, instead of standard input. -l (The letter ell.) Report all jobs scheduled for the invoking user if no at_job_id operands are specified. If at_job_ids are specified, report only information for these jobs. The output shall be written to standard output. -m Send mail to the invoking user after the at-job has run, announcing its completion. Standard output and standard error produced by the at-job shall be mailed to the user as well, unless redirected elsewhere. Mail shall be sent even if the job produces no output. If -m is not used, the job's standard output and standard error shall be provided to the user by means of mail, unless they are redirected elsewhere; if there is no such output to provide, the implementation need not notify the user of the job's completion. -q queuename Specify in which queue to schedule a job for submission. When used with the -l option, limit the search to that particular queue. By default, at-jobs shall be scheduled in queue a. In contrast, queue b shall be reserved for batch jobs; see batch. The meanings of all other queuenames are implementation-defined. 
If -q is specified along with either of the -t time_arg or timespec arguments, the results are unspecified. -r Remove the jobs with the specified at_job_id operands that were previously scheduled by the at utility. -t time_arg Submit the job to be run at the time specified by the time option-argument, which the application shall ensure has the format as specified by the touch -t time utility. OPERANDS top The following operands shall be supported: at_job_id The name reported by a previous invocation of the at utility at the time the job was scheduled. timespec Submit the job to be run at the date and time specified. All of the timespec operands are interpreted as if they were separated by <space> characters and concatenated, and shall be parsed as described in the grammar at the end of this section. The date and time shall be interpreted as being in the timezone of the user (as determined by the TZ variable), unless a timezone name appears as part of time, below. In the POSIX locale, the following describes the three parts of the time specification string. All of the values from the LC_TIME categories in the POSIX locale shall be recognized in a case-insensitive manner. time The time can be specified as one, two, or four digits. One-digit and two-digit numbers shall be taken to be hours; four-digit numbers to be hours and minutes. The time can alternatively be specified as two numbers separated by a <colon>, meaning hour:minute. An AM/PM indication (one of the values from the am_pm keywords in the LC_TIME locale category) can follow the time; otherwise, a 24-hour clock time shall be understood. A timezone name can also follow to further qualify the time. The acceptable timezone names are implementation-defined, except that they shall be case-insensitive and the string utc is supported to indicate the time is in Coordinated Universal Time. In the POSIX locale, the time field can also be one of the following tokens: midnight Indicates the time 12:00 am (00:00). 
noon Indicates the time 12:00 pm. now Indicates the current day and time. Invoking at <now> shall submit an at-job for potentially immediate execution (that is, subject only to unspecified scheduling delays). date An optional date can be specified as either a month name (one of the values from the mon or abmon keywords in the LC_TIME locale category) followed by a day number (and possibly year number preceded by a comma), or a day of the week (one of the values from the day or abday keywords in the LC_TIME locale category). In the POSIX locale, two special days shall be recognized: today Indicates the current day. tomorrow Indicates the day following the current day. If no date is given, today shall be assumed if the given time is greater than the current time, and tomorrow shall be assumed if it is less. If the given month is less than the current month (and no year is given), next year shall be assumed. increment The optional increment shall be a number preceded by a <plus-sign> ('+') and suffixed by one of the following: minutes, hours, days, weeks, months, or years. (The singular forms shall also be accepted.) The keyword next shall be equivalent to an increment number of +1. For example, the following are equivalent commands: at 2pm + 1 week at 2pm next week The following grammar describes the precise format of timespec in the POSIX locale. The general conventions for this style of grammar are described in Section 1.3, Grammar Conventions. This formal syntax shall take precedence over the preceding text syntax description. The longest possible token or delimiter shall be recognized at a given point. When used in a timespec, white space shall also delimit tokens. %token hr24clock_hr_min %token hr24clock_hour /* An hr24clock_hr_min is a one, two, or four-digit number. A one-digit or two-digit number constitutes an hr24clock_hour. An hr24clock_hour may be any of the single digits [0,9], or may be double digits, ranging from [00,23]. 
If an hr24clock_hr_min is a four-digit number, the first two digits shall be a valid hr24clock_hour, while the last two represent the number of minutes, from [00,59]. */ %token wallclock_hr_min %token wallclock_hour /* A wallclock_hr_min is a one, two-digit, or four-digit number. A one-digit or two-digit number constitutes a wallclock_hour. A wallclock_hour may be any of the single digits [1,9], or may be double digits, ranging from [01,12]. If a wallclock_hr_min is a four-digit number, the first two digits shall be a valid wallclock_hour, while the last two represent the number of minutes, from [00,59]. */ %token minute /* A minute is a one or two-digit number whose value can be [0,9] or [00,59]. */ %token day_number /* A day_number is a number in the range appropriate for the particular month and year specified by month_name and year_number, respectively. If no year_number is given, the current year is assumed if the given date and time are later this year. If no year_number is given and the date and time have already occurred this year and the month is not the current month, next year is the assumed year. */ %token year_number /* A year_number is a four-digit number representing the year A.D., in which the at_job is to be run. */ %token inc_number /* The inc_number is the number of times the succeeding increment period is to be added to the specified date and time. */ %token timezone_name /* The name of an optional timezone suffix to the time field, in an implementation-defined format. */ %token month_name /* One of the values from the mon or abmon keywords in the LC_TIME locale category. */ %token day_of_week /* One of the values from the day or abday keywords in the LC_TIME locale category. */ %token am_pm /* One of the values from the am_pm keyword in the LC_TIME locale category. 
*/
%start timespec
%%
timespec   : time
           | time date
           | time increment
           | time date increment
           | nowspec
           ;
nowspec    : "now"
           | "now" increment
           ;
time       : hr24clock_hr_min
           | hr24clock_hr_min timezone_name
           | hr24clock_hour ":" minute
           | hr24clock_hour ":" minute timezone_name
           | wallclock_hr_min am_pm
           | wallclock_hr_min am_pm timezone_name
           | wallclock_hour ":" minute am_pm
           | wallclock_hour ":" minute am_pm timezone_name
           | "noon"
           | "midnight"
           ;
date       : month_name day_number
           | month_name day_number "," year_number
           | day_of_week
           | "today"
           | "tomorrow"
           ;
increment  : "+" inc_number inc_period
           | "next" inc_period
           ;
inc_period : "minute" | "minutes"
           | "hour" | "hours"
           | "day" | "days"
           | "week" | "weeks"
           | "month" | "months"
           | "year" | "years"
           ;
STDIN top The standard input shall be a text file consisting of commands acceptable to the shell command language described in Chapter 2, Shell Command Language. The standard input shall only be used if no -f file option is specified. INPUT FILES top See the STDIN section. The text files at.allow and at.deny, which are located in an implementation-defined directory, shall contain zero or more user names, one per line, of users who are, respectively, authorized or denied access to the at and batch utilities. ENVIRONMENT VARIABLES top The following environment variables shall affect the execution of at: LANG Provide a default value for the internationalization variables that are unset or null. (See the Base Definitions volume of POSIX.1-2017, Section 8.2, Internationalization Variables for the precedence of internationalization variables used to determine the values of locale categories.) LC_ALL If set to a non-empty string value, override the values of all the other internationalization variables. LC_CTYPE Determine the locale for the interpretation of sequences of bytes of text data as characters (for example, single-byte as opposed to multi-byte characters in arguments and input files). 
LC_MESSAGES Determine the locale that should be used to affect the format and contents of diagnostic messages written to standard error and informative messages written to standard output. NLSPATH Determine the location of message catalogs for the processing of LC_MESSAGES. LC_TIME Determine the format and contents for date and time strings written and accepted by at. SHELL Determine a name of a command interpreter to be used to invoke the at-job. If the variable is unset or null, sh shall be used. If it is set to a value other than a name for sh, the implementation shall do one of the following: use that shell; use sh; use the login shell from the user database; or any of the preceding accompanied by a warning diagnostic about which was chosen. TZ Determine the timezone. The job shall be submitted for execution at the time specified by timespec or -t time relative to the timezone specified by the TZ variable. If timespec specifies a timezone, it shall override TZ. If timespec does not specify a timezone and TZ is unset or null, an unspecified default timezone shall be used. ASYNCHRONOUS EVENTS top Default. STDOUT top When standard input is a terminal, prompts of unspecified format for each line of the user input described in the STDIN section may be written to standard output. In the POSIX locale, the following shall be written to the standard output for each job when jobs are listed in response to the -l option: "%s\t%s\n", at_job_id, <date> where date shall be equivalent in format to the output of: date +"%a %b %e %T %Y" The date and time written shall be adjusted so that they appear in the timezone of the user (as determined by the TZ variable). STDERR top In the POSIX locale, the following shall be written to standard error when a job has been successfully submitted: "job %s at %s\n", at_job_id, <date> where date has the same format as that described in the STDOUT section. 
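The -l listing format above pairs each at_job_id with a date equivalent to the output of date +"%a %b %e %T %Y"; this sketch previews that layout with date(1) alone, so no at job needs to exist:

```shell
# Preview the five-field date layout used in `at -l` listings:
# abbreviated weekday, abbreviated month, day, HH:MM:SS, year.
stamp=$(date +"%a %b %e %T %Y")
echo "$stamp"
```

Note that %e pads a single-digit day of month with a leading blank, so the columns stay aligned throughout the month.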
Neither this, nor warning messages concerning the selection of the command interpreter, shall be considered a diagnostic that changes the exit status. Diagnostic messages, if any, shall be written to standard error. OUTPUT FILES top None. EXTENDED DESCRIPTION top None. EXIT STATUS top The following exit values shall be returned: 0 The at utility successfully submitted, removed, or listed a job or jobs. >0 An error occurred. CONSEQUENCES OF ERRORS top The job shall not be scheduled, removed, or listed. The following sections are informative. APPLICATION USAGE top The format of the at command line shown here is guaranteed only for the POSIX locale. Other cultures may be supported with substantially different interfaces, although implementations are encouraged to provide comparable levels of functionality. Since the commands run in a separate shell invocation, running in a separate process group with no controlling terminal, open file descriptors, traps, and priority inherited from the invoking environment are lost. Some implementations do not allow substitution of different shells using SHELL. System V systems, for example, have used the login shell value for the user in /etc/passwd. To select reliably another command interpreter, the user must include it as part of the script, such as: $ at 1800 myshell myscript EOT job ... at ... $ EXAMPLES top 1. This sequence can be used at a terminal: at -m 0730 tomorrow sort < file >outfile EOT 2. This sequence, which demonstrates redirecting standard error to a pipe, is useful in a command procedure (the sequence of output redirection specifications is significant): at now + 1 hour <<! diff file1 file2 2>&1 >outfile | mailx mygroup ! 3. To have a job reschedule itself, at can be invoked from within the at-job. For example, this daily processing script named my.daily runs every day (although crontab is a more appropriate vehicle for such work): # my.daily runs every day daily processing at now tomorrow < my.daily 4. 
The spacing of the three portions of the POSIX locale timespec is quite flexible as long as there are no ambiguities. Examples of various times and operand presentation include: at 0815am Jan 24 at 8 :15amjan24 at now "+ 1day" at 5 pm FRIday at '17 utc+ 30minutes' RATIONALE top The at utility reads from standard input the commands to be executed at a later time. It may be useful to redirect standard output and standard error within the specified commands. The -t time option was added as a new capability to support an internationalized way of specifying a time for execution of the submitted job. Early proposals added a "jobname" concept as a way of giving submitted jobs names that are meaningful to the user submitting them. The historical, system-specified at_job_id gives no indication of what the job is. Upon further reflection, it was decided that the benefit of this was not worth the change in historical interface. The at functionality is useful in simple environments, but in large or complex situations, the functionality provided by the Batch Services option is more suitable. The -q option historically has been an undocumented option, used mainly by the batch utility. The System V -m option was added to provide a method for informing users that an at-job had completed. Otherwise, users are only informed when output to standard error or standard output are not redirected. The behavior of at <now> was changed in an early proposal from being unspecified to submitting a job for potentially immediate execution. Historical BSD at implementations support this. Historical System V implementations give an error in that case, but a change to the System V versions should have no backwards-compatibility ramifications. On BSD-based systems, a -u user option has allowed those with appropriate privileges to access the work of other users. Since this is primarily a system administration feature and is not universally implemented, it has been omitted. 
Similarly, a specification for the output format for a user with appropriate privileges viewing the queues of other users has been omitted. The -f file option from System V is used instead of the BSD method of using the last operand as the pathname. The BSD method is ambiguous: does at 1200 friday mean the same thing if there is a file named friday in the current directory? The at_job_id is composed of a limited character set in historical practice, and it is mandated here to invalidate systems that might try using characters that require shell quoting or that could not be easily parsed by shell scripts. The at utility varies between System V and BSD systems in the way timezones are used. On System V systems, the TZ variable affects the at-job submission times and the times displayed for the user. On BSD systems, TZ is not taken into account. The BSD behavior is easily achieved with the current specification. If the user wishes to have the timezone default to that of the system, they merely need to issue the at command immediately following an unsetting or null assignment to TZ. For example: TZ= at noon ... gives the desired BSD result. While the yacc-like grammar specified in the OPERANDS section is lexically unambiguous with respect to the digit strings, a lexical analyzer would probably be written to look for and return digit strings in those cases. The parser could then check whether the digit string returned is a valid day_number, year_number, and so on, based on the context. FUTURE DIRECTIONS top None. 
SEE ALSO top batch(1p), crontab(1p) The Base Definitions volume of POSIX.1-2017, Chapter 8, Environment Variables, Section 12.2, Utility Syntax Guidelines COPYRIGHT top Portions of this text are reprinted and reproduced in electronic form from IEEE Std 1003.1-2017, Standard for Information Technology -- Portable Operating System Interface (POSIX), The Open Group Base Specifications Issue 7, 2018 Edition, Copyright (C) 2018 by the Institute of Electrical and Electronics Engineers, Inc and The Open Group. In the event of any discrepancy between this version and the original IEEE and The Open Group Standard, the original IEEE and The Open Group Standard is the referee document. The original Standard can be obtained online at http://www.opengroup.org/unix/online.html . Any typographical or formatting errors that appear in this page are most likely to have been introduced during the conversion of the source files to man page format. To report such errors, see https://www.kernel.org/doc/man-pages/reporting_bugs.html . IEEE/The Open Group 2017 AT(1P) Pages that refer to this page: batch(1p), crontab(1p) | # at\n\n> Executes commands at a specified time.\n> More information: <https://man.archlinux.org/man/at.1>.\n\n- Open an `at` prompt to create a new set of scheduled commands, press `Ctrl + D` to save and exit:\n\n`at {{hh:mm}}`\n\n- Execute the commands and email the result using a local mailing program such as Sendmail:\n\n`at {{hh:mm}} -m`\n\n- Execute a script at the given time:\n\n`at {{hh:mm}} -f {{path/to/file}}`\n\n- Display a system notification at 11pm on February 18th:\n\n`echo "notify-send '{{Wake up!}}'" | at {{11pm}} {{Feb 18}}`\n |
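Tying the -t option back to the touch -t format: the sketch below builds a CCYYMMDDhhmm timestamp with date(1). The at invocation itself is shown only in a comment, since running it requires a working at daemon, and the script path is a placeholder:

```shell
# touch -t style timestamp for the current minute: CCYYMMDDhhmm.
ts=$(date +%Y%m%d%H%M)
echo "$ts"

# The submission would then look like (placeholder path, not run here):
#   at -t "$ts" -f path/to/job.sh
```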
auditd | auditd(8) - Linux manual page auditd(8) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | SIGNALS | EXIT CODES | FILES | NOTES | SEE ALSO | AUTHOR | COLOPHON AUDITD(8) System Administration Utilities AUDITD(8) NAME top auditd - The Linux Audit daemon SYNOPSIS top auditd [-f] [-l] [-n] [-s disable|enable|nochange] [-c <config_dir>] DESCRIPTION top auditd is the userspace component to the Linux Auditing System. It's responsible for writing audit records to the disk. Viewing the logs is done with the ausearch or aureport utilities. Configuring the audit system or loading rules is done with the auditctl utility. During startup, the rules in /etc/audit/audit.rules are read by auditctl and loaded into the kernel. Alternately, there is also an augenrules program that reads rules located in /etc/audit/rules.d/ and compiles them into an audit.rules file. The audit daemon itself has some configuration options that the admin may wish to customize. They are found in the auditd.conf file. OPTIONS top -f leave the audit daemon in the foreground for debugging. Messages also go to stderr rather than the audit log. -l allow the audit daemon to follow symlinks for config files. -n no fork. This is useful for running off of inittab or systemd. -s ENABLE_STATE specify when starting if auditd should change the current value for the kernel enabled flag. Valid values for ENABLE_STATE are "disable", "enable" or "nochange". The default is to enable (and disable when auditd terminates). The value of the enabled flag may be changed during the lifetime of auditd using 'auditctl -e'. -c Specify alternate config file directory. Note that this same directory will be passed to the dispatcher. (default: /etc/audit/) SIGNALS top SIGHUP causes auditd to reconfigure. This means that auditd re-reads the configuration file. If there are no syntax errors, it will proceed to implement the requested changes. 
If the reconfigure is successful, a DAEMON_CONFIG event is recorded in the logs. If not successful, error handling is controlled by space_left_action, admin_space_left_action, disk_full_action, and disk_error_action parameters in auditd.conf. SIGTERM causes auditd to discontinue processing audit events, write a shutdown audit event, and exit. SIGUSR1 causes auditd to immediately rotate the logs. It will consult the max_log_file_action to see if it should keep the logs or not. SIGUSR2 causes auditd to attempt to resume logging and passing events to plugins. This is usually needed after logging has been suspended or the internal queue has overflowed. Either of these conditions depends on the applicable configuration settings. SIGCONT causes auditd to dump a report of internal state to /var/run/auditd.state. EXIT CODES top 1 Cannot adjust priority, daemonize, open audit netlink, write the pid file, start up plugins, resolve the machine name, set audit pid, or other initialization tasks. 2 Invalid or excessive command line arguments 4 The audit daemon doesn't have sufficient privilege 6 There is an error in the configuration file FILES top /etc/audit/auditd.conf - configuration file for audit daemon /etc/audit/audit.rules - audit rules to be loaded at startup /etc/audit/rules.d/ - directory holding individual sets of rules to be compiled into one file by augenrules. /etc/audit/plugins.d/ - directory holding individual plugin configuration files. /etc/audit/audit-stop - These rules are loaded when the audit daemon stops. /var/run/auditd.state - report about internal state. NOTES top A boot param of audit=1 should be added to ensure that all processes that run before the audit daemon starts are marked as auditable by the kernel. Not doing that will make a few processes impossible to properly audit. The audit daemon can receive audit events from other audit daemons via the audisp-remote plugin. 
The audit daemon may be linked with tcp_wrappers to control which machines can connect. If this is the case, you can add an entry to hosts.allow and deny. SEE ALSO top auditd.conf(5), auditd-plugins(5), ausearch(8), aureport(8), auditctl(8), augenrules(8), audit.rules(7). AUTHOR top Steve Grubb COLOPHON top This page is part of the audit (Linux Audit) project. Information about the project can be found at http://people.redhat.com/sgrubb/audit/. If you have a bug report for this manual page, send it to linux-audit@redhat.com. This page was obtained from the project's upstream Git repository https://github.com/linux-audit/audit-userspace.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-11-30.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org Red Hat Sept 2021 AUDITD(8) Pages that refer to this page: audit_request_status(3), audit_set_backlog_limit(3), audit_set_backlog_wait_time(3), audit_set_enabled(3), audit_set_failure(3), audit_set_pid(3), audit_set_rate_limit(3), get_auditfail_action(3), set_aumessage_mode(3), auditd.conf(5), auditd-plugins(5), zos-remote.conf(5), audit.rules(7), audispd-zos-remote(8), auditctl(8), augenrules(8), aureport(8), ausearch(8), pam_loginuid(8), systemd-update-utmp.service(8) 
| # auditd\n\n> This responds to requests from the audit utility and notifications from the kernel.\n> It should not be invoked manually.\n> More information: <https://manned.org/auditd>.\n\n- Start the daemon:\n\n`auditd`\n\n- Start the daemon in debug mode:\n\n`auditd -d`\n\n- Start the daemon on-demand from launchd:\n\n`auditd -l`\n |
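The error-handling and rotation parameters named in the SIGNALS section live in auditd.conf; a minimal illustrative fragment might look like the following (the values are examples, not recommendations; consult auditd.conf(5) for the full set and defaults):

```
# Fragment of /etc/audit/auditd.conf (illustrative values only)
max_log_file_action = ROTATE
space_left_action = SYSLOG
admin_space_left_action = SUSPEND
disk_full_action = SUSPEND
disk_error_action = SYSLOG
```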
awk | awk(1p) - Linux manual page awk(1p) Linux manual page PROLOG | NAME | SYNOPSIS | DESCRIPTION | OPTIONS | OPERANDS | STDIN | INPUT FILES | ENVIRONMENT VARIABLES | ASYNCHRONOUS EVENTS | STDOUT | STDERR | OUTPUT FILES | EXTENDED DESCRIPTION | EXIT STATUS | CONSEQUENCES OF ERRORS | APPLICATION USAGE | EXAMPLES | RATIONALE | FUTURE DIRECTIONS | SEE ALSO | COPYRIGHT AWK(1P) POSIX Programmer's Manual AWK(1P) PROLOG top This manual page is part of the POSIX Programmer's Manual. The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux. NAME top awk - pattern scanning and processing language SYNOPSIS top awk [-F sepstring] [-v assignment]... program [argument...] awk [-F sepstring] -f progfile [-f progfile]... [-v assignment]... [argument...] DESCRIPTION top The awk utility shall execute programs written in the awk programming language, which is specialized for textual data manipulation. An awk program is a sequence of patterns and corresponding actions. When input is read that matches a pattern, the action associated with that pattern is carried out. Input shall be interpreted as a sequence of records. By default, a record is a line, less its terminating <newline>, but this can be changed by using the RS built-in variable. Each record of input shall be matched in turn against each pattern in the program. For each pattern matched, the associated action shall be executed. The awk utility shall interpret each input record as a sequence of fields where, by default, a field is a string of non-<blank> non-<newline> characters. This default <blank> and <newline> field delimiter can be changed by using the FS built-in variable or the -F sepstring option. The awk utility shall denote the first field in a record $1, the second $2, and so on. 
The symbol $0 shall refer to the entire record; setting any other field causes the re-evaluation of $0. Assigning to $0 shall reset the values of all other fields and the NF built-in variable. OPTIONS top The awk utility shall conform to the Base Definitions volume of POSIX.1-2017, Section 12.2, Utility Syntax Guidelines. The following options shall be supported: -F sepstring Define the input field separator. This option shall be equivalent to: -v FS=sepstring except that if -F sepstring and -v FS=sepstring are both used, it is unspecified whether the FS assignment resulting from -F sepstring is processed in command line order or is processed after the last -v FS=sepstring. See the description of the FS built-in variable, and how it is used, in the EXTENDED DESCRIPTION section. -f progfile Specify the pathname of the file progfile containing an awk program. A pathname of '-' shall denote the standard input. If multiple instances of this option are specified, the concatenation of the files specified as progfile in the order specified shall be the awk program. The awk program can alternatively be specified in the command line as a single argument. -v assignment The application shall ensure that the assignment argument is in the same form as an assignment operand. The specified variable assignment shall occur prior to executing the awk program, including the actions associated with BEGIN patterns (if any). Multiple occurrences of this option can be specified. OPERANDS top The following operands shall be supported: program If no -f option is specified, the first operand to awk shall be the text of the awk program. The application shall supply the program operand as a single argument to awk. If the text does not end in a <newline>, awk shall interpret the text as if it did. argument Either of the following two types of argument can be intermixed: file A pathname of a file that contains the input to be read, which is matched against the set of patterns in the program. 
If no file operands are specified, or if a file operand is '-', the standard input shall be used. assignment An operand that begins with an <underscore> or alphabetic character from the portable character set (see the table in the Base Definitions volume of POSIX.1-2017, Section 6.1, Portable Character Set), followed by a sequence of underscores, digits, and alphabetics from the portable character set, followed by the '=' character, shall specify a variable assignment rather than a pathname. The characters before the '=' represent the name of an awk variable; if that name is an awk reserved word (see Grammar) the behavior is undefined. The characters following the <equals-sign> shall be interpreted as if they appeared in the awk program preceded and followed by a double-quote ('"') character, as a STRING token (see Grammar), except that if the last character is an unescaped <backslash>, it shall be interpreted as a literal <backslash> rather than as the first character of the sequence "\"". The variable shall be assigned the value of that STRING token and, if appropriate, shall be considered a numeric string (see Expressions in awk), the variable shall also be assigned its numeric value. Each such variable assignment shall occur just prior to the processing of the following file, if any. Thus, an assignment before the first file argument shall be executed after the BEGIN actions (if any), while an assignment after the last file argument shall occur before the END actions (if any). If there are no file arguments, assignments shall be executed before processing the standard input. STDIN top The standard input shall be used only if no file operands are specified, or if a file operand is '-', or if a progfile option-argument is '-'; see the INPUT FILES section. If the awk program contains no actions and no patterns, but is otherwise a valid awk program, standard input and any file operands shall not be read and awk shall exit with a return status of zero. 
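The -F and -v options, and the per-file assignment operands described above, can be exercised directly; a small sketch (temporary files stand in for real inputs):

```shell
# -F sets the field separator; -v assigns a variable before the
# program (including any BEGIN action) runs.
printf 'root:x:0:0\n' | awk -F: -v n=1 '{ print $n }'

# An assignment operand between file operands takes effect just
# before the file that follows it is read.
tmp1=$(mktemp) && tmp2=$(mktemp)
printf 'one\n' > "$tmp1"
printf 'two\n' > "$tmp2"
awk '{ print tag, $0 }' tag=A "$tmp1" tag=B "$tmp2"
rm -f "$tmp1" "$tmp2"
```

The first command prints root; the second prints A one then B two, showing the assignment being re-evaluated at each file boundary.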
INPUT FILES top Input files to the awk program from any of the following sources shall be text files: * Any file operands or their equivalents, achieved by modifying the awk variables ARGV and ARGC * Standard input in the absence of any file operands * Arguments to the getline function Whether the variable RS is set to a value other than a <newline> or not, for these files, implementations shall support records terminated with the specified separator up to {LINE_MAX} bytes and may support longer records. If -f progfile is specified, the application shall ensure that the files named by each of the progfile option-arguments are text files and their concatenation, in the same order as they appear in the arguments, is an awk program. ENVIRONMENT VARIABLES top The following environment variables shall affect the execution of awk: LANG Provide a default value for the internationalization variables that are unset or null. (See the Base Definitions volume of POSIX.1-2017, Section 8.2, Internationalization Variables for the precedence of internationalization variables used to determine the values of locale categories.) LC_ALL If set to a non-empty string value, override the values of all the other internationalization variables. LC_COLLATE Determine the locale for the behavior of ranges, equivalence classes, and multi-character collating elements within regular expressions and in comparisons of string values. LC_CTYPE Determine the locale for the interpretation of sequences of bytes of text data as characters (for example, single-byte as opposed to multi-byte characters in arguments and input files), the behavior of character classes within regular expressions, the identification of characters as letters, and the mapping of uppercase and lowercase characters for the toupper and tolower functions. LC_MESSAGES Determine the locale that should be used to affect the format and contents of diagnostic messages written to standard error. 
    LC_NUMERIC
              Determine the radix character used when interpreting numeric input, performing conversions between numeric and string values, and formatting numeric output. Regardless of locale, the <period> character (the decimal-point character of the POSIX locale) is the decimal-point character recognized in processing awk programs (including assignments in command line arguments).

    NLSPATH   Determine the location of message catalogs for the processing of LC_MESSAGES.

    PATH      Determine the search path when looking for commands executed by system(expr), or input and output pipes; see the Base Definitions volume of POSIX.1-2017, Chapter 8, Environment Variables.

    In addition, all environment variables shall be visible via the awk variable ENVIRON.

ASYNCHRONOUS EVENTS
    Default.

STDOUT
    The nature of the output files depends on the awk program.

STDERR
    The standard error shall be used only for diagnostic messages.

OUTPUT FILES
    The nature of the output files depends on the awk program.

EXTENDED DESCRIPTION
    Overall Program Structure

    An awk program is composed of pairs of the form:

        pattern { action }

    Either the pattern or the action (including the enclosing brace characters) can be omitted. A missing pattern shall match any record of input, and a missing action shall be equivalent to:

        { print }

    Execution of the awk program shall start by first executing the actions associated with all BEGIN patterns in the order they occur in the program. Then each file operand (or standard input if no files were specified) shall be processed in turn by reading data from the file until a record separator is seen (<newline> by default). Before the first reference to a field in the record is evaluated, the record shall be split into fields, according to the rules in Regular Expressions, using the value of FS that was current at the time the record was read.
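The BEGIN/pattern/END execution order described above can be seen in a short, non-normative example:

```shell
printf 'a\nbb\nccc\n' | awk '
    BEGIN { print "start" }          # runs before any input is read
    length($0) > 1 { print $0 }      # action runs for each matching record
    END { print NR, "records" }      # runs after the last record is read
'
# start
# bb
# ccc
# 3 records
```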
    Each pattern in the program then shall be evaluated in the order of occurrence, and the action associated with each pattern that matches the current record executed. The action for a matching pattern shall be executed before evaluating subsequent patterns. Finally, the actions associated with all END patterns shall be executed in the order they occur in the program.

    Expressions in awk

    Expressions describe computations used in patterns and actions. In the following table, valid expression operations are given in groups from highest precedence first to lowest precedence last, with equal-precedence operators grouped between horizontal lines. In expression evaluation, where the grammar is formally ambiguous, higher precedence operators shall be evaluated before lower precedence operators. In this table expr, expr1, expr2, and expr3 represent any expression, while lvalue represents any entity that can be assigned to (that is, on the left side of an assignment operator). The precise syntax of expressions is given in Grammar.

    Table 4-1: Expressions in Decreasing Precedence in awk

        Syntax                 Name                        Type of Result    Associativity
        ( expr )               Grouping                    Type of expr      N/A
        $expr                  Field reference             String            N/A
        lvalue ++              Post-increment              Numeric           N/A
        lvalue --              Post-decrement              Numeric           N/A
        ++ lvalue              Pre-increment               Numeric           N/A
        -- lvalue              Pre-decrement               Numeric           N/A
        expr ^ expr            Exponentiation              Numeric           Right
        ! expr                 Logical not                 Numeric           N/A
        + expr                 Unary plus                  Numeric           N/A
        - expr                 Unary minus                 Numeric           N/A
        expr * expr            Multiplication              Numeric           Left
        expr / expr            Division                    Numeric           Left
        expr % expr            Modulus                     Numeric           Left
        expr + expr            Addition                    Numeric           Left
        expr - expr            Subtraction                 Numeric           Left
        expr expr              String concatenation        String            Left
        expr < expr            Less than                   Numeric           None
        expr <= expr           Less than or equal to       Numeric           None
        expr != expr           Not equal to                Numeric           None
        expr == expr           Equal to                    Numeric           None
        expr > expr            Greater than                Numeric           None
        expr >= expr           Greater than or equal to    Numeric           None
        expr ~ expr            ERE match                   Numeric           None
        expr !~ expr           ERE non-match               Numeric           None
        expr in array          Array membership            Numeric           Left
        ( index ) in array     Multi-dimension array       Numeric           Left
                               membership
        expr && expr           Logical AND                 Numeric           Left
        expr || expr           Logical OR                  Numeric           Left
        expr1 ? expr2 : expr3  Conditional expression      Type of selected  Right
                                                           expr2 or expr3
        lvalue ^= expr         Exponentiation assignment   Numeric           Right
        lvalue %= expr         Modulus assignment          Numeric           Right
        lvalue *= expr         Multiplication assignment   Numeric           Right
        lvalue /= expr         Division assignment         Numeric           Right
        lvalue += expr         Addition assignment         Numeric           Right
        lvalue -= expr         Subtraction assignment      Numeric           Right
        lvalue = expr          Assignment                  Type of expr      Right

    Each expression shall have either a string value, a numeric value, or both. Except as stated for specific contexts, the value of an expression shall be implicitly converted to the type needed for the context in which it is used. A string value shall be converted to a numeric value either by the equivalent of the following calls to functions defined by the ISO C standard:

        setlocale(LC_NUMERIC, "");
        numeric_value = atof(string_value);

    or by converting the initial portion of the string to type double representation as follows:

    The input string is decomposed into two parts: an initial, possibly empty, sequence of white-space characters (as specified by isspace()) and a subject sequence interpreted as a floating-point constant.
    The expected form of the subject sequence is an optional '+' or '-' sign, then a non-empty sequence of digits optionally containing a <period>, then an optional exponent part. An exponent part consists of 'e' or 'E', followed by an optional sign, followed by one or more decimal digits. The sequence starting with the first digit or the <period> (whichever occurs first) is interpreted as a floating constant of the C language, and if neither an exponent part nor a <period> appears, a <period> is assumed to follow the last digit in the string. If the subject sequence begins with a <hyphen-minus>, the value resulting from the conversion is negated.

    A numeric value that is exactly equal to the value of an integer (see Section 1.1.2, Concepts Derived from the ISO C Standard) shall be converted to a string by the equivalent of a call to the sprintf function (see String Functions) with the string "%d" as the fmt argument and the numeric value being converted as the first and only expr argument. Any other numeric value shall be converted to a string by the equivalent of a call to the sprintf function with the value of the variable CONVFMT as the fmt argument and the numeric value being converted as the first and only expr argument. The result of the conversion is unspecified if the value of CONVFMT is not a floating-point format specification. This volume of POSIX.1-2017 specifies no explicit conversions between numbers and strings. An application can force an expression to be treated as a number by adding zero to it, or can force it to be treated as a string by concatenating the null string ("") to it.

    A string value shall be considered a numeric string if it comes from one of the following:

     1. Field variables

     2. Input from the getline function

     3. FILENAME

     4. ARGV array elements

     5. ENVIRON array elements

     6. Array elements created by the split() function

     7. A command line variable assignment

     8. Variable assignment from another numeric string variable

    and an implementation-dependent condition corresponding to either case (a) or (b) below is met.

     a. After the equivalent of the following calls to functions defined by the ISO C standard, string_value_end would differ from string_value, and any characters before the terminating null character in string_value_end would be <blank> characters:

            char *string_value_end;
            setlocale(LC_NUMERIC, "");
            numeric_value = strtod(string_value, &string_value_end);

     b. After all the following conversions have been applied, the resulting string would lexically be recognized as a NUMBER token as described by the lexical conventions in Grammar:

        -- All leading and trailing <blank> characters are discarded.

        -- If the first non-<blank> is '+' or '-', it is discarded.

        -- Each occurrence of the decimal point character from the current locale is changed to a <period>.

    In case (a) the numeric value of the numeric string shall be the value that would be returned by the strtod() call. In case (b) if the first non-<blank> is '-', the numeric value of the numeric string shall be the negation of the numeric value of the recognized NUMBER token; otherwise, the numeric value of the numeric string shall be the numeric value of the recognized NUMBER token. Whether or not a string is a numeric string shall be relevant only in contexts where that term is used in this section.

    When an expression is used in a Boolean context, if it has a numeric value, a value of zero shall be treated as false and any other value shall be treated as true. Otherwise, a string value of the null string shall be treated as false and any other value shall be treated as true.
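A non-normative sketch of these conversion rules: fields that look numeric are numeric strings and therefore compare numerically, unless explicitly forced to strings by concatenating the null string:

```shell
echo '10 9' | awk '{
    print ($1 > $2)             # numeric strings: 10 > 9 numerically  -> 1
    print (($1 "") > ($2 ""))   # forced to strings: "10" < "9" lexically -> 0
    print ("10" + 0)            # adding zero forces numeric treatment -> 10
}'
```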
A Boolean context shall be one of the following: * The first subexpression of a conditional expression * An expression operated on by logical NOT, logical AND, or logical OR * The second expression of a for statement * The expression of an if statement * The expression of the while clause in either a while or do...while statement * An expression used as a pattern (as in Overall Program Structure) All arithmetic shall follow the semantics of floating-point arithmetic as specified by the ISO C standard (see Section 1.1.2, Concepts Derived from the ISO C Standard). The value of the expression: expr1 ^ expr2 shall be equivalent to the value returned by the ISO C standard function call: pow(expr1, expr2) The expression: lvalue ^= expr shall be equivalent to the ISO C standard expression: lvalue = pow(lvalue, expr) except that lvalue shall be evaluated only once. The value of the expression: expr1 % expr2 shall be equivalent to the value returned by the ISO C standard function call: fmod(expr1, expr2) The expression: lvalue %= expr shall be equivalent to the ISO C standard expression: lvalue = fmod(lvalue, expr) except that lvalue shall be evaluated only once. Variables and fields shall be set by the assignment statement: lvalue = expression and the type of expression shall determine the resulting variable type. The assignment includes the arithmetic assignments ("+=", "-=", "*=", "/=", "%=", "^=", "++", "--") all of which shall produce a numeric result. The left-hand side of an assignment and the target of increment and decrement operators can be one of a variable, an array with index, or a field selector. The awk language supplies arrays that are used for storing numbers or strings. Arrays need not be declared. They shall initially be empty, and their sizes shall change dynamically. The subscripts, or element identifiers, are strings, providing a type of associative array capability. 
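A brief, non-normative demonstration that '^' behaves as pow() and '%' as fmod(), including the compound-assignment forms:

```shell
awk 'BEGIN {
    print 2 ^ 10     # pow(2, 10) -> 1024
    print 7 % 2.5    # fmod(7, 2.5) -> 2 (floating-point modulus)
    x = 3; x ^= 2    # equivalent to x = pow(x, 2), x evaluated once
    print x          # 9
}'
```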
    An array name followed by a subscript within square brackets can be used as an lvalue and thus as an expression, as described in the grammar; see Grammar. Unsubscripted array names can be used in only the following contexts:

     *  A parameter in a function definition or function call

     *  The NAME token following any use of the keyword in as specified in the grammar (see Grammar); if the name used in this context is not an array name, the behavior is undefined

    A valid array index shall consist of one or more <comma>-separated expressions, similar to the way in which multi-dimensional arrays are indexed in some programming languages. Because awk arrays are really one-dimensional, such a <comma>-separated list shall be converted to a single string by concatenating the string values of the separate expressions, each separated from the other by the value of the SUBSEP variable. Thus, the following two index operations shall be equivalent:

        var[expr1, expr2, ... exprn]
        var[expr1 SUBSEP expr2 SUBSEP ... SUBSEP exprn]

    The application shall ensure that a multi-dimensioned index used with the in operator is parenthesized. The in operator, which tests for the existence of a particular array element, shall not cause that element to exist. Any other reference to a nonexistent array element shall automatically create it.

    Comparisons (with the '<', "<=", "!=", "==", '>', and ">=" operators) shall be made numerically if both operands are numeric, if one is numeric and the other has a string value that is a numeric string, or if one is numeric and the other has the uninitialized value. Otherwise, operands shall be converted to strings as required and a string comparison shall be made as follows:

     *  For the "!=" and "==" operators, the strings should be compared to check if they are identical but may be compared using the locale-specific collation sequence to check if they collate equally.
* For the other operators, the strings shall be compared using the locale-specific collation sequence. The value of the comparison expression shall be 1 if the relation is true, or 0 if the relation is false. Variables and Special Variables Variables can be used in an awk program by referencing them. With the exception of function parameters (see User-Defined Functions), they are not explicitly declared. Function parameter names shall be local to the function; all other variable names shall be global. The same name shall not be used as both a function parameter name and as the name of a function or a special awk variable. The same name shall not be used both as a variable name with global scope and as the name of a function. The same name shall not be used within the same scope both as a scalar variable and as an array. Uninitialized variables, including scalar variables, array elements, and field variables, shall have an uninitialized value. An uninitialized value shall have both a numeric value of zero and a string value of the empty string. Evaluation of variables with an uninitialized value, to either string or numeric, shall be determined by the context in which they are used. Field variables shall be designated by a '$' followed by a number or numerical expression. The effect of the field number expression evaluating to anything other than a non-negative integer is unspecified; uninitialized variables or string values need not be converted to numeric values in this context. New field variables can be created by assigning a value to them. References to nonexistent fields (that is, fields after $NF), shall evaluate to the uninitialized value. Such references shall not create new fields. However, assigning to a nonexistent field (for example, $(NF+2)=5) shall increase the value of NF; create any intervening fields with the uninitialized value; and cause the value of $0 to be recomputed, with the fields being separated by the value of OFS. 
    Each field variable shall have a string value or an uninitialized value when created. Field variables shall have the uninitialized value when created from $0 using FS and the variable does not contain any characters. If appropriate, the field variable shall be considered a numeric string (see Expressions in awk).

    Implementations shall support the following other special variables that are set by awk:

    ARGC      The number of elements in the ARGV array.

    ARGV      An array of command line arguments, excluding options and the program argument, numbered from zero to ARGC-1. The arguments in ARGV can be modified or added to; ARGC can be altered. As each input file ends, awk shall treat the next non-null element of ARGV, up to the current value of ARGC-1, inclusive, as the name of the next input file. Thus, setting an element of ARGV to null means that it shall not be treated as an input file. The name '-' indicates the standard input. If an argument matches the format of an assignment operand, this argument shall be treated as an assignment rather than a file argument.

    CONVFMT   The printf format for converting numbers to strings (except for output statements, where OFMT is used); "%.6g" by default.

    ENVIRON   An array representing the value of the environment, as described in the exec functions defined in the System Interfaces volume of POSIX.1-2017. The indices of the array shall be strings consisting of the names of the environment variables, and the value of each array element shall be a string consisting of the value of that variable. If appropriate, the environment variable shall be considered a numeric string (see Expressions in awk); the array element shall also have its numeric value.
In all cases where the behavior of awk is affected by environment variables (including the environment of any commands that awk executes via the system function or via pipeline redirections with the print statement, the printf statement, or the getline function), the environment used shall be the environment at the time awk began executing; it is implementation-defined whether any modification of ENVIRON affects this environment. FILENAME A pathname of the current input file. Inside a BEGIN action the value is undefined. Inside an END action the value shall be the name of the last input file processed. FNR The ordinal number of the current record in the current file. Inside a BEGIN action the value shall be zero. Inside an END action the value shall be the number of the last record processed in the last file processed. FS Input field separator regular expression; a <space> by default. NF The number of fields in the current record. Inside a BEGIN action, the use of NF is undefined unless a getline function without a var argument is executed previously. Inside an END action, NF shall retain the value it had for the last record read, unless a subsequent, redirected, getline function without a var argument is performed prior to entering the END action. NR The ordinal number of the current record from the start of input. Inside a BEGIN action the value shall be zero. Inside an END action the value shall be the number of the last record processed. OFMT The printf format for converting numbers to strings in output statements (see Output Statements); "%.6g" by default. The result of the conversion is unspecified if the value of OFMT is not a floating-point format specification. OFS The print statement output field separator; <space> by default. ORS The print statement output record separator; a <newline> by default. RLENGTH The length of the string matched by the match function. 
    RS        The first character of the string value of RS shall be the input record separator; a <newline> by default. If RS contains more than one character, the results are unspecified. If RS is null, then records are separated by sequences consisting of a <newline> plus one or more blank lines, leading or trailing blank lines shall not result in empty records at the beginning or end of the input, and a <newline> shall always be a field separator, no matter what the value of FS is.

    RSTART    The starting position of the string matched by the match function, numbering from 1. This shall always be equivalent to the return value of the match function.

    SUBSEP    The subscript separator string for multi-dimensional arrays; the default value is implementation-defined.

    Regular Expressions

    The awk utility shall make use of the extended regular expression notation (see the Base Definitions volume of POSIX.1-2017, Section 9.4, Extended Regular Expressions) except that it shall allow the use of C-language conventions for escaping special characters within the EREs, as specified in the table in the Base Definitions volume of POSIX.1-2017, Chapter 5, File Format Notation ('\\', '\a', '\b', '\f', '\n', '\r', '\t', '\v') and the following table; these escape sequences shall be recognized both inside and outside bracket expressions. Note that records need not be separated by <newline> characters and string constants can contain <newline> characters, so even the "\n" sequence is valid in awk EREs. Using a <slash> character within an ERE requires the escaping shown in the following table.

    Table 4-2: Escape Sequences in awk

    \"    <backslash> <quotation-mark>
          The <quotation-mark> character.

    \/    <backslash> <slash>
          The <slash> character.

    \ddd  A <backslash> character followed by the longest sequence of one, two, or three octal-digit characters (01234567). If all of the digits are 0 (that is, representation of the NUL character), the behavior is undefined.
          The character whose encoding is represented by the one-, two-, or three-digit octal integer. Multi-byte characters require multiple, concatenated escape sequences of this type, including the leading <backslash> for each byte.

    \c    A <backslash> character followed by any character not described in this table or in the table in the Base Definitions volume of POSIX.1-2017, Chapter 5, File Format Notation ('\\', '\a', '\b', '\f', '\n', '\r', '\t', '\v').
          Undefined.

    A regular expression can be matched against a specific field or string by using one of the two regular expression matching operators, '~' and "!~". These operators shall interpret their right-hand operand as a regular expression and their left-hand operand as a string. If the regular expression matches the string, the '~' expression shall evaluate to a value of 1, and the "!~" expression shall evaluate to a value of 0. (The regular expression matching operation is as defined by the term matched in the Base Definitions volume of POSIX.1-2017, Section 9.1, Regular Expression Definitions, where a match occurs on any part of the string unless the regular expression is limited with the <circumflex> or <dollar-sign> special characters.) If the regular expression does not match the string, the '~' expression shall evaluate to a value of 0, and the "!~" expression shall evaluate to a value of 1. If the right-hand operand is any expression other than the lexical token ERE, the string value of the expression shall be interpreted as an extended regular expression, including the escape conventions described above. Note that these same escape conventions shall also be applied in determining the value of a string literal (the lexical token STRING), and thus shall be applied a second time when a string literal is used in this context.
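The double application of escape conventions to string literals used as dynamic regular expressions can be sketched non-normatively as follows:

```shell
awk 'BEGIN {
    print ("foo123" ~ /[0-9]+$/)  # ERE token: digits at end of string -> 1
    r = "\\."                     # string literal "\\." yields the two characters \.
    print ("a.b" ~ r)             # dynamic ERE \. matches a literal period -> 1
    print ("axb" ~ r)             # no literal period present -> 0
}'
```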
When an ERE token appears as an expression in any context other than as the right-hand of the '~' or "!~" operator or as one of the built-in function arguments described below, the value of the resulting expression shall be the equivalent of: $0 ~ /ere/ The ere argument to the gsub, match, sub functions, and the fs argument to the split function (see String Functions) shall be interpreted as extended regular expressions. These can be either ERE tokens or arbitrary expressions, and shall be interpreted in the same manner as the right-hand side of the '~' or "!~" operator. An extended regular expression can be used to separate fields by assigning a string containing the expression to the built-in variable FS, either directly or as a consequence of using the -F sepstring option. The default value of the FS variable shall be a single <space>. The following describes FS behavior: 1. If FS is a null string, the behavior is unspecified. 2. If FS is a single character: a. If FS is <space>, skip leading and trailing <blank> and <newline> characters; fields shall be delimited by sets of one or more <blank> or <newline> characters. b. Otherwise, if FS is any other character c, fields shall be delimited by each single occurrence of c. 3. Otherwise, the string value of FS shall be considered to be an extended regular expression. Each occurrence of a sequence matching the extended regular expression shall delimit fields. Except for the '~' and "!~" operators, and in the gsub, match, split, and sub built-in functions, ERE matching shall be based on input records; that is, record separator characters (the first character of the value of the variable RS, <newline> by default) cannot be embedded in the expression, and no expression shall match the record separator character. If the record separator is not <newline>, <newline> characters embedded in the expression can be matched. 
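The single-character versus multi-character FS rules above can be illustrated with a non-normative example using the -F option:

```shell
# FS as a single character: every occurrence delimits a field.
echo 'a:b::c' | awk -F: '{ print NF }'       # 4 (the third field is empty)

# FS longer than one character: treated as an ERE.
echo 'a:b::c' | awk -F':+' '{ print NF }'    # 3 (runs of ":" collapse into one delimiter)
```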
    For the '~' and "!~" operators, and in those four built-in functions, ERE matching shall be based on text strings; that is, any character (including <newline> and the record separator) can be embedded in the pattern, and an appropriate pattern shall match any character. However, in all awk ERE matching, the use of one or more NUL characters in the pattern, input record, or text string produces undefined results.

    Patterns

    A pattern is any valid expression, a range specified by two expressions separated by a comma, or one of the two special patterns BEGIN or END.

    Special Patterns

    The awk utility shall recognize two special patterns, BEGIN and END. Each BEGIN pattern shall be matched once and its associated action executed before the first record of input is read (except possibly by use of the getline function in a prior BEGIN action; see Input/Output and General Functions) and before command line assignment is done. Each END pattern shall be matched once and its associated action executed after the last record of input has been read. These two patterns shall have associated actions.

    BEGIN and END shall not combine with other patterns. Multiple BEGIN and END patterns shall be allowed. The actions associated with the BEGIN patterns shall be executed in the order specified in the program, as are the END actions. An END pattern can precede a BEGIN pattern in a program.

    If an awk program consists of only actions with the pattern BEGIN, and the BEGIN action contains no getline function, awk shall exit without reading its input when the last statement in the last BEGIN action is executed. If an awk program consists of only actions with the pattern END or only actions with the patterns BEGIN and END, the input shall be read before the statements in the END actions are executed.

    Expression Patterns

    An expression pattern shall be evaluated as if it were an expression in a Boolean context.
If the result is true, the pattern shall be considered to match, and the associated action (if any) shall be executed. If the result is false, the action shall not be executed. Pattern Ranges A pattern range consists of two expressions separated by a comma; in this case, the action shall be performed for all records between a match of the first expression and the following match of the second expression, inclusive. At this point, the pattern range can be repeated starting at input records subsequent to the end of the matched range. Actions An action is a sequence of statements as shown in the grammar in Grammar. Any single statement can be replaced by a statement list enclosed in curly braces. The application shall ensure that statements in a statement list are separated by <newline> or <semicolon> characters. Statements in a statement list shall be executed sequentially in the order that they appear. The expression acting as the conditional in an if statement shall be evaluated and if it is non-zero or non-null, the following statement shall be executed; otherwise, if else is present, the statement following the else shall be executed. The if, while, do...while, for, break, and continue statements are based on the ISO C standard (see Section 1.1.2, Concepts Derived from the ISO C Standard), except that the Boolean expressions shall be treated as described in Expressions in awk, and except in the case of: for (variable in array) which shall iterate, assigning each index of array to variable in an unspecified order. The results of adding new elements to array within such a for loop are undefined. If a break or continue statement occurs outside of a loop, the behavior is undefined. The delete statement shall remove an individual array element. Thus, the following code deletes an entire array: for (index in array) delete array[index] The next statement shall cause all further processing of the current input record to be abandoned. 
    The behavior is undefined if a next statement appears or is invoked in a BEGIN or END action.

    The exit statement shall invoke all END actions in the order in which they occur in the program source and then terminate the program without reading further input. An exit statement inside an END action shall terminate the program without further execution of END actions. If an expression is specified in an exit statement, its numeric value shall be the exit status of awk, unless subsequent errors are encountered or a subsequent exit statement with an expression is executed.

    Output Statements

    Both print and printf statements shall write to standard output by default. The output shall be written to the location specified by output_redirection if one is supplied, as follows:

        > expression
        >> expression
        | expression

    In all cases, the expression shall be evaluated to produce a string that is used as a pathname into which to write (for '>' or ">>") or as a command to be executed (for '|'). Using the first two forms, if the file of that name is not currently open, it shall be opened, creating it if necessary and using the first form, truncating the file. The output then shall be appended to the file. As long as the file remains open, subsequent calls in which expression evaluates to the same string value shall simply append output to the file. The file remains open until the close function (see Input/Output and General Functions) is called with an expression that evaluates to the same string value.

    The third form shall write output onto a stream piped to the input of a command. The stream shall be created if no stream is currently open with the value of expression as its command name. The stream created shall be equivalent to one created by a call to the popen() function defined in the System Interfaces volume of POSIX.1-2017 with the value of expression as the command argument and a value of w as the mode argument.
    As long as the stream remains open, subsequent calls in which expression evaluates to the same string value shall write output to the existing stream. The stream shall remain open until the close function (see Input/Output and General Functions) is called with an expression that evaluates to the same string value. At that time, the stream shall be closed as if by a call to the pclose() function defined in the System Interfaces volume of POSIX.1-2017.

    As described in detail by the grammar in Grammar, these output statements shall take a <comma>-separated list of expressions referred to in the grammar by the non-terminal symbols expr_list, print_expr_list, or print_expr_list_opt. This list is referred to here as the expression list, and each member is referred to as an expression argument.

    The print statement shall write the value of each expression argument onto the indicated output stream separated by the current output field separator (see variable OFS above), and terminated by the output record separator (see variable ORS above). All expression arguments shall be taken as strings, being converted if necessary; this conversion shall be as described in Expressions in awk, with the exception that the printf format in OFMT shall be used instead of the value in CONVFMT. An empty expression list shall stand for the whole input record ($0).

    The printf statement shall produce output based on a notation similar to the File Format Notation used to describe file formats in this volume of POSIX.1-2017 (see the Base Definitions volume of POSIX.1-2017, Chapter 5, File Format Notation). Output shall be produced as specified with the first expression argument as the string format and subsequent expression arguments as the strings arg1 to argn, inclusive, with the following exceptions:

     1. The format shall be an actual character string rather than a graphical representation. Therefore, it cannot contain empty character positions.
    The <space> in the format string, in any context other than a flag of a conversion specification, shall be treated as an ordinary character that is copied to the output.

     2. If the character set contains a '∆' character and that character appears in the format string, it shall be treated as an ordinary character that is copied to the output.

     3. The escape sequences beginning with a <backslash> character shall be treated as sequences of ordinary characters that are copied to the output. Note that these same sequences shall be interpreted lexically by awk when they appear in literal strings, but they shall not be treated specially by the printf statement.

     4. A field width or precision can be specified as the '*' character instead of a digit string. In this case the next argument from the expression list shall be fetched and its numeric value taken as the field width or precision.

     5. The implementation shall not precede or follow output from the d or u conversion specifier characters with <blank> characters not specified by the format string.

     6. The implementation shall not precede output from the o conversion specifier character with leading zeros not specified by the format string.

     7. For the c conversion specifier character: if the argument has a numeric value, the character whose encoding is that value shall be output. If the value is zero or is not the encoding of any character in the character set, the behavior is undefined. If the argument does not have a numeric value, the first character of the string value shall be output; if the string does not contain any characters, the behavior is undefined.

     8. For each conversion specification that consumes an argument, the next expression argument shall be evaluated. With the exception of the c conversion specifier character, the value shall be converted (according to the rules specified in Expressions in awk) to the appropriate type for the conversion specification.

     9.
If there are insufficient expression arguments to satisfy all the conversion specifications in the format string, the behavior is undefined.

10. If any character sequence in the format string begins with a '%' character, but does not form a valid conversion specification, the behavior is unspecified.

Both print and printf can output at least {LINE_MAX} bytes.

Functions
The awk language has a variety of built-in functions: arithmetic, string, input/output, and general.

Arithmetic Functions
The arithmetic functions, except for int, shall be based on the ISO C standard (see Section 1.1.2, Concepts Derived from the ISO C Standard). The behavior is undefined in cases where the ISO C standard specifies that an error be returned or that the behavior is undefined. Although the grammar (see Grammar) permits built-in functions to appear with no arguments or parentheses, unless the argument or parentheses are indicated as optional in the following list (by displaying them within the "[]" brackets), such use is undefined.

atan2(y,x)  Return arctangent of y/x in radians in the range [-π,π].

cos(x)  Return cosine of x, where x is in radians.

sin(x)  Return sine of x, where x is in radians.

exp(x)  Return the exponential function of x.

log(x)  Return the natural logarithm of x.

sqrt(x)  Return the square root of x.

int(x)  Return the argument truncated to an integer. Truncation shall be toward 0 when x>0.

rand()  Return a random number n, such that 0≤n<1.

srand([expr])  Set the seed value for rand to expr or use the time of day if expr is omitted. The previous seed value shall be returned.

String Functions
The string functions in the following list shall be supported. Although the grammar (see Grammar) permits built-in functions to appear with no arguments or parentheses, unless the argument or parentheses are indicated as optional in the following list (by displaying them within the "[]" brackets), such use is undefined.
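A quick shell session illustrates the arithmetic-function rules above (truncation toward zero, the previous-seed return of srand, the atan2 range). This is an illustrative sketch, not part of the specification; it assumes a POSIX-conforming awk on PATH.

```shell
awk 'BEGIN {
    print int(3.9)                  # truncation toward zero: 3
    print int(-3.9)                 # and for negative arguments: -3
    srand(10)
    print srand(20)                 # srand returns the previous seed: 10
    printf "%.4f\n", atan2(0, -1)   # pi, the top of the [-pi,pi] range
}'
```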
gsub(ere, repl[, in])  Behave like sub (see below), except that it shall replace all occurrences of the regular expression (like the ed utility global substitute) in $0 or in the in argument, when specified.

index(s, t)  Return the position, in characters, numbering from 1, in string s where string t first occurs, or zero if it does not occur at all.

length[([s])]  Return the length, in characters, of its argument taken as a string, or of the whole record, $0, if there is no argument.

match(s, ere)  Return the position, in characters, numbering from 1, in string s where the extended regular expression ere occurs, or zero if it does not occur at all. RSTART shall be set to the starting position (which is the same as the returned value), zero if no match is found; RLENGTH shall be set to the length of the matched string, -1 if no match is found.

split(s, a[, fs])  Split the string s into array elements a[1], a[2], ..., a[n], and return n. All elements of the array shall be deleted before the split is performed. The separation shall be done with the ERE fs or with the field separator FS if fs is not given. Each array element shall have a string value when created and, if appropriate, the array element shall be considered a numeric string (see Expressions in awk). The effect of a null string as the value of fs is unspecified.

sprintf(fmt, expr, expr, ...)  Format the expressions according to the printf format given by fmt and return the resulting string.

sub(ere, repl[, in])  Substitute the string repl in place of the first instance of the extended regular expression ERE in string in and return the number of substitutions. An <ampersand> ('&') appearing in the string repl shall be replaced by the string from in that matches the ERE. An <ampersand> preceded with a <backslash> shall be interpreted as the literal <ampersand> character. An occurrence of two consecutive <backslash> characters shall be interpreted as just a single literal <backslash> character.
Any other occurrence of a <backslash> (for example, preceding any other character) shall be treated as a literal <backslash> character. Note that if repl is a string literal (the lexical token STRING; see Grammar), the handling of the <ampersand> character occurs after any lexical processing, including any lexical <backslash>-escape sequence processing. If in is specified and it is not an lvalue (see Expressions in awk), the behavior is undefined. If in is omitted, awk shall use the current record ($0) in its place.

substr(s, m[, n])  Return the at most n-character substring of s that begins at position m, numbering from 1. If n is omitted, or if n specifies more characters than are left in the string, the length of the substring shall be limited by the length of the string s.

tolower(s)  Return a string based on the string s. Each character in s that is an uppercase letter specified to have a tolower mapping by the LC_CTYPE category of the current locale shall be replaced in the returned string by the lowercase letter specified by the mapping. Other characters in s shall be unchanged in the returned string.

toupper(s)  Return a string based on the string s. Each character in s that is a lowercase letter specified to have a toupper mapping by the LC_CTYPE category of the current locale is replaced in the returned string by the uppercase letter specified by the mapping. Other characters in s are unchanged in the returned string.

All of the preceding functions that take ERE as a parameter expect a pattern or a string valued expression that is a regular expression as defined in Regular Expressions.

Input/Output and General Functions
The input/output and general functions are:

close(expression)  Close the file or pipe opened by a print or printf statement or a call to getline with the same string-valued expression. The limit on the number of open expression arguments is implementation-defined.
If the close was successful, the function shall return zero; otherwise, it shall return non-zero.

expression | getline [var]  Read a record of input from a stream piped from the output of a command. The stream shall be created if no stream is currently open with the value of expression as its command name. The stream created shall be equivalent to one created by a call to the popen() function with the value of expression as the command argument and a value of "r" as the mode argument. As long as the stream remains open, subsequent calls in which expression evaluates to the same string value shall read subsequent records from the stream. The stream shall remain open until the close function is called with an expression that evaluates to the same string value. At that time, the stream shall be closed as if by a call to the pclose() function. If var is omitted, $0 and NF shall be set; otherwise, var shall be set and, if appropriate, it shall be considered a numeric string (see Expressions in awk).

The getline operator can form ambiguous constructs when there are unparenthesized operators (including concatenate) to the left of the '|' (to the beginning of the expression containing getline). In the context of the '$' operator, '|' shall behave as if it had a lower precedence than '$'. The result of evaluating other operators is unspecified, and conforming applications shall parenthesize properly all such usages.

getline  Set $0 to the next input record from the current input file. This form of getline shall set the NF, NR, and FNR variables.

getline var  Set variable var to the next input record from the current input file and, if appropriate, var shall be considered a numeric string (see Expressions in awk). This form of getline shall set the FNR and NR variables.

getline [var] < expression  Read the next record of input from a named file. The expression shall be evaluated to produce a string that is used as a pathname.
If the file of that name is not currently open, it shall be opened. As long as the stream remains open, subsequent calls in which expression evaluates to the same string value shall read subsequent records from the file. The file shall remain open until the close function is called with an expression that evaluates to the same string value. If var is omitted, $0 and NF shall be set; otherwise, var shall be set and, if appropriate, it shall be considered a numeric string (see Expressions in awk).

The getline operator can form ambiguous constructs when there are unparenthesized binary operators (including concatenate) to the right of the '<' (up to the end of the expression containing the getline). The result of evaluating such a construct is unspecified, and conforming applications shall parenthesize properly all such usages.

system(expression)  Execute the command given by expression in a manner equivalent to the system() function defined in the System Interfaces volume of POSIX.1-2017 and return the exit status of the command.

All forms of getline shall return 1 for successful input, zero for end-of-file, and -1 for an error.

Where strings are used as the name of a file or pipeline, the application shall ensure that the strings are textually identical. The terminology "same string value" implies that "equivalent strings", even those that differ only by <space> characters, represent different files.

User-Defined Functions
The awk language also provides user-defined functions. Such functions can be defined as:

function name([parameter, ...]) { statements }

A function can be referred to anywhere in an awk program; in particular, its use can precede its definition. The scope of a function is global. Function parameters, if present, can be either scalars or arrays; the behavior is undefined if an array name is passed as a parameter that the function uses as a scalar, or if a scalar expression is passed as a parameter that the function uses as an array.
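The rules above (extra formal parameters serving as locals, arrays passed by reference, getline returning 1 on success) can be sketched from the shell. This is an illustrative sketch, not normative text; the names fill, squares, and line are invented for the example.

```shell
awk '
function fill(arr, n,    i) {   # extra parameter i serves as a local
    for (i = 1; i <= n; i++)
        arr[i] = i * i
    return n
}
BEGIN {
    count = fill(squares, 3)        # the array is passed by reference
    for (j = 1; j <= count; j++)
        print squares[j]            # 1, 4, 9
    cmd = "echo done"
    if ((cmd | getline line) > 0)   # getline returns 1 for success
        print line                  # done
    close(cmd)                      # reusing cmd later reruns the command
}'
```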
Function parameters shall be passed by value if scalar and by reference if array name. The number of parameters in the function definition need not match the number of parameters in the function call. Excess formal parameters can be used as local variables. If fewer arguments are supplied in a function call than are in the function definition, the extra parameters that are used in the function body as scalars shall evaluate to the uninitialized value until they are otherwise initialized, and the extra parameters that are used in the function body as arrays shall be treated as uninitialized arrays where each element evaluates to the uninitialized value until otherwise initialized.

When invoking a function, no white space can be placed between the function name and the opening parenthesis. Function calls can be nested and recursive calls can be made upon functions. Upon return from any nested or recursive function call, the values of all of the calling function's parameters shall be unchanged, except for array parameters passed by reference. The return statement can be used to return a value. If a return statement appears outside of a function definition, the behavior is undefined.

In the function definition, <newline> characters shall be optional before the opening brace and after the closing brace. Function definitions can appear anywhere in the program where a pattern-action pair is allowed.

Grammar
The grammar in this section and the lexical conventions in the following section shall together describe the syntax for awk programs. The general conventions for this style of grammar are described in Section 1.3, Grammar Conventions. A valid program can be represented as the non-terminal symbol program in the grammar. This formal syntax shall take precedence over the preceding text syntax description.

%token NAME NUMBER STRING ERE
%token FUNC_NAME /* Name followed by '(' without white space.
*/

/* Keywords */
%token Begin End
/* 'BEGIN' 'END' */
%token Break Continue Delete Do Else
/* 'break' 'continue' 'delete' 'do' 'else' */
%token Exit For Function If In
/* 'exit' 'for' 'function' 'if' 'in' */
%token Next Print Printf Return While
/* 'next' 'print' 'printf' 'return' 'while' */

/* Reserved function names */
%token BUILTIN_FUNC_NAME
/* One token for the following:
 * atan2 cos sin exp log sqrt int rand srand
 * gsub index length match split sprintf sub
 * substr tolower toupper close system
 */
%token GETLINE /* Syntactically different from other built-ins. */

/* Two-character tokens. */
%token ADD_ASSIGN SUB_ASSIGN MUL_ASSIGN DIV_ASSIGN MOD_ASSIGN POW_ASSIGN
/* '+=' '-=' '*=' '/=' '%=' '^=' */
%token OR AND NO_MATCH EQ LE GE NE INCR DECR APPEND
/* '||' '&&' '!~' '==' '<=' '>=' '!=' '++' '--' '>>' */

/* One-character tokens. */
%token '{' '}' '(' ')' '[' ']' ',' ';' NEWLINE
%token '+' '-' '*' '%' '^' '!' '>' '<' '|' '?' ':' '~' '$' '='

%start program
%%

program          : item_list
                 | item_list item
                 ;

item_list        : /* empty */
                 | item_list item terminator
                 ;

item             : action
                 | pattern action
                 | normal_pattern
                 | Function NAME '(' param_list_opt ')' newline_opt action
                 | Function FUNC_NAME '(' param_list_opt ')' newline_opt action
                 ;

param_list_opt   : /* empty */
                 | param_list
                 ;

param_list       : NAME
                 | param_list ',' NAME
                 ;

pattern          : normal_pattern
                 | special_pattern
                 ;

normal_pattern   : expr
                 | expr ',' newline_opt expr
                 ;

special_pattern  : Begin
                 | End
                 ;

action           : '{' newline_opt '}'
                 | '{' newline_opt terminated_statement_list '}'
                 | '{' newline_opt unterminated_statement_list '}'
                 ;

terminator       : terminator NEWLINE
                 | ';'
                 | NEWLINE
                 ;

terminated_statement_list : terminated_statement
                 | terminated_statement_list terminated_statement
                 ;

unterminated_statement_list : unterminated_statement
                 | terminated_statement_list unterminated_statement
                 ;

terminated_statement : action newline_opt
                 | If '(' expr ')' newline_opt terminated_statement
                 | If '(' expr ')' newline_opt terminated_statement Else
                       newline_opt terminated_statement
                 | While '(' expr ')' newline_opt terminated_statement
                 | For '(' simple_statement_opt ';' expr_opt ';'
                       simple_statement_opt ')' newline_opt terminated_statement
                 | For '(' NAME In NAME ')' newline_opt terminated_statement
                 | ';' newline_opt
                 | terminatable_statement NEWLINE newline_opt
                 | terminatable_statement ';' newline_opt
                 ;

unterminated_statement : terminatable_statement
                 | If '(' expr ')' newline_opt unterminated_statement
                 | If '(' expr ')' newline_opt terminated_statement
                       Else newline_opt unterminated_statement
                 | While '(' expr ')' newline_opt unterminated_statement
                 | For '(' simple_statement_opt ';' expr_opt ';'
                       simple_statement_opt ')' newline_opt unterminated_statement
                 | For '(' NAME In NAME ')' newline_opt unterminated_statement
                 ;

terminatable_statement : simple_statement
                 | Break
                 | Continue
                 | Next
                 | Exit expr_opt
                 | Return expr_opt
                 | Do newline_opt terminated_statement While '(' expr ')'
                 ;

simple_statement_opt : /* empty */
                 | simple_statement
                 ;

simple_statement : Delete NAME '[' expr_list ']'
                 | expr
                 | print_statement
                 ;

print_statement  : simple_print_statement
                 | simple_print_statement output_redirection
                 ;

simple_print_statement : Print print_expr_list_opt
                 | Print '(' multiple_expr_list ')'
                 | Printf print_expr_list
                 | Printf '(' multiple_expr_list ')'
                 ;

output_redirection : '>' expr
                 | APPEND expr
                 | '|' expr
                 ;

expr_list_opt    : /* empty */
                 | expr_list
                 ;

expr_list        : expr
                 | multiple_expr_list
                 ;

multiple_expr_list : expr ',' newline_opt expr
                 | multiple_expr_list ',' newline_opt expr
                 ;

expr_opt         : /* empty */
                 | expr
                 ;

expr             : unary_expr
                 | non_unary_expr
                 ;

unary_expr       : '+' expr
                 | '-' expr
                 | unary_expr '^' expr
                 | unary_expr '*' expr
                 | unary_expr '/' expr
                 | unary_expr '%' expr
                 | unary_expr '+' expr
                 | unary_expr '-' expr
                 | unary_expr non_unary_expr
                 | unary_expr '<' expr
                 | unary_expr LE expr
                 | unary_expr NE expr
                 | unary_expr EQ expr
                 | unary_expr '>' expr
                 | unary_expr GE expr
                 | unary_expr '~' expr
                 | unary_expr NO_MATCH expr
                 | unary_expr In NAME
                 |
                   unary_expr AND newline_opt expr
                 | unary_expr OR newline_opt expr
                 | unary_expr '?' expr ':' expr
                 | unary_input_function
                 ;

non_unary_expr   : '(' expr ')'
                 | '!' expr
                 | non_unary_expr '^' expr
                 | non_unary_expr '*' expr
                 | non_unary_expr '/' expr
                 | non_unary_expr '%' expr
                 | non_unary_expr '+' expr
                 | non_unary_expr '-' expr
                 | non_unary_expr non_unary_expr
                 | non_unary_expr '<' expr
                 | non_unary_expr LE expr
                 | non_unary_expr NE expr
                 | non_unary_expr EQ expr
                 | non_unary_expr '>' expr
                 | non_unary_expr GE expr
                 | non_unary_expr '~' expr
                 | non_unary_expr NO_MATCH expr
                 | non_unary_expr In NAME
                 | '(' multiple_expr_list ')' In NAME
                 | non_unary_expr AND newline_opt expr
                 | non_unary_expr OR newline_opt expr
                 | non_unary_expr '?' expr ':' expr
                 | NUMBER
                 | STRING
                 | lvalue
                 | ERE
                 | lvalue INCR
                 | lvalue DECR
                 | INCR lvalue
                 | DECR lvalue
                 | lvalue POW_ASSIGN expr
                 | lvalue MOD_ASSIGN expr
                 | lvalue MUL_ASSIGN expr
                 | lvalue DIV_ASSIGN expr
                 | lvalue ADD_ASSIGN expr
                 | lvalue SUB_ASSIGN expr
                 | lvalue '=' expr
                 | FUNC_NAME '(' expr_list_opt ')'
                      /* no white space allowed before '(' */
                 | BUILTIN_FUNC_NAME '(' expr_list_opt ')'
                 | BUILTIN_FUNC_NAME
                 | non_unary_input_function
                 ;

print_expr_list_opt : /* empty */
                 | print_expr_list
                 ;

print_expr_list  : print_expr
                 | print_expr_list ',' newline_opt print_expr
                 ;

print_expr       : unary_print_expr
                 | non_unary_print_expr
                 ;

unary_print_expr : '+' print_expr
                 | '-' print_expr
                 | unary_print_expr '^' print_expr
                 | unary_print_expr '*' print_expr
                 | unary_print_expr '/' print_expr
                 | unary_print_expr '%' print_expr
                 | unary_print_expr '+' print_expr
                 | unary_print_expr '-' print_expr
                 | unary_print_expr non_unary_print_expr
                 | unary_print_expr '~' print_expr
                 | unary_print_expr NO_MATCH print_expr
                 | unary_print_expr In NAME
                 | unary_print_expr AND newline_opt print_expr
                 | unary_print_expr OR newline_opt print_expr
                 | unary_print_expr '?' print_expr ':' print_expr
                 ;

non_unary_print_expr : '(' expr ')'
                 | '!'
                       print_expr
                 | non_unary_print_expr '^' print_expr
                 | non_unary_print_expr '*' print_expr
                 | non_unary_print_expr '/' print_expr
                 | non_unary_print_expr '%' print_expr
                 | non_unary_print_expr '+' print_expr
                 | non_unary_print_expr '-' print_expr
                 | non_unary_print_expr non_unary_print_expr
                 | non_unary_print_expr '~' print_expr
                 | non_unary_print_expr NO_MATCH print_expr
                 | non_unary_print_expr In NAME
                 | '(' multiple_expr_list ')' In NAME
                 | non_unary_print_expr AND newline_opt print_expr
                 | non_unary_print_expr OR newline_opt print_expr
                 | non_unary_print_expr '?' print_expr ':' print_expr
                 | NUMBER
                 | STRING
                 | lvalue
                 | ERE
                 | lvalue INCR
                 | lvalue DECR
                 | INCR lvalue
                 | DECR lvalue
                 | lvalue POW_ASSIGN print_expr
                 | lvalue MOD_ASSIGN print_expr
                 | lvalue MUL_ASSIGN print_expr
                 | lvalue DIV_ASSIGN print_expr
                 | lvalue ADD_ASSIGN print_expr
                 | lvalue SUB_ASSIGN print_expr
                 | lvalue '=' print_expr
                 | FUNC_NAME '(' expr_list_opt ')'
                      /* no white space allowed before '(' */
                 | BUILTIN_FUNC_NAME '(' expr_list_opt ')'
                 | BUILTIN_FUNC_NAME
                 ;

lvalue           : NAME
                 | NAME '[' expr_list ']'
                 | '$' expr
                 ;

non_unary_input_function : simple_get
                 | simple_get '<' expr
                 | non_unary_expr '|' simple_get
                 ;

unary_input_function : unary_expr '|' simple_get
                 ;

simple_get       : GETLINE
                 | GETLINE lvalue
                 ;

newline_opt      : /* empty */
                 | newline_opt NEWLINE
                 ;

This grammar has several ambiguities that shall be resolved as follows:

* Operator precedence and associativity shall be as described in Table 4-1, Expressions in Decreasing Precedence in awk.

* In case of ambiguity, an else shall be associated with the most immediately preceding if that would satisfy the grammar.

* In some contexts, a <slash> ('/') that is used to surround an ERE could also be the division operator. This shall be resolved in such a way that wherever the division operator could appear, a <slash> is assumed to be the division operator. (There is no unary division operator.)
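The <slash> disambiguation rule above can be seen from the shell: where a division operator can appear, '/' is division; where an expression can begin, '/' introduces an ERE constant. An illustrative sketch, not part of the grammar:

```shell
echo '6' | awk '
{
    print $1 / 2           # operator position: "/" is division
    if ($0 ~ /6/)          # expression position: "/" begins an ERE
        print "matched"
}'
```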
Each expression in an awk program shall conform to the precedence and associativity rules, even when this is not needed to resolve an ambiguity. For example, because '$' has higher precedence than '++', the string "$x++--" is not a valid awk expression, even though it is unambiguously parsed by the grammar as "$(x++)--".

One convention that might not be obvious from the formal grammar is where <newline> characters are acceptable. There are several obvious placements such as terminating a statement, and a <backslash> can be used to escape <newline> characters between any lexical tokens. In addition, <newline> characters without <backslash> characters can follow a comma, an open brace, logical AND operator ("&&"), logical OR operator ("||"), the do keyword, the else keyword, and the closing parenthesis of an if, for, or while statement. For example:

   { print $1,
          $2 }

Lexical Conventions
The lexical conventions for awk programs, with respect to the preceding grammar, shall be as follows:

1. Except as noted, awk shall recognize the longest possible token or delimiter beginning at a given point.

2. A comment shall consist of any characters beginning with the <number-sign> character and terminated by, but excluding the next occurrence of, a <newline>. Comments shall have no effect, except to delimit lexical tokens.

3. The <newline> shall be recognized as the token NEWLINE.

4. A <backslash> character immediately followed by a <newline> shall have no effect.

5. The token STRING shall represent a string constant. A string constant shall begin with the character '"'. Within a string constant, a <backslash> character shall be considered to begin an escape sequence as specified in the table in the Base Definitions volume of POSIX.1-2017, Chapter 5, File Format Notation ('\\', '\a', '\b', '\f', '\n', '\r', '\t', '\v'). In addition, the escape sequences in Table 4-2, Escape Sequences in awk shall be recognized. A <newline> shall not occur within a string constant.
A string constant shall be terminated by the first unescaped occurrence of the character '"' after the one that begins the string constant. The value of the string shall be the sequence of all unescaped characters and values of escape sequences between, but not including, the two delimiting '"' characters.

6. The token ERE represents an extended regular expression constant. An ERE constant shall begin with the <slash> character. Within an ERE constant, a <backslash> character shall be considered to begin an escape sequence as specified in the table in the Base Definitions volume of POSIX.1-2017, Chapter 5, File Format Notation. In addition, the escape sequences in Table 4-2, Escape Sequences in awk shall be recognized. The application shall ensure that a <newline> does not occur within an ERE constant. An ERE constant shall be terminated by the first unescaped occurrence of the <slash> character after the one that begins the ERE constant. The extended regular expression represented by the ERE constant shall be the sequence of all unescaped characters and values of escape sequences between, but not including, the two delimiting <slash> characters.

7. A <blank> shall have no effect, except to delimit lexical tokens or within STRING or ERE tokens.

8. The token NUMBER shall represent a numeric constant. Its form and numeric value shall either be equivalent to the decimal-floating-constant token as specified by the ISO C standard, or it shall be a sequence of decimal digits and shall be evaluated as an integer constant in decimal. In addition, implementations may accept numeric constants with the form and numeric value equivalent to the hexadecimal-constant and hexadecimal-floating-constant tokens as specified by the ISO C standard. If the value is too large or too small to be representable (see Section 1.1.2, Concepts Derived from the ISO C Standard), the behavior is undefined.

9.
A sequence of underscores, digits, and alphabetics from the portable character set (see the Base Definitions volume of POSIX.1-2017, Section 6.1, Portable Character Set), beginning with an <underscore> or alphabetic character, shall be considered a word.

10. The following words are keywords that shall be recognized as individual tokens; the name of the token is the same as the keyword:

   BEGIN      delete     END        function   in         printf
   break      do         exit       getline    next       return
   continue   else       for        if         print      while

11. The following words are names of built-in functions and shall be recognized as the token BUILTIN_FUNC_NAME:

   atan2      gsub       log        split      sub        toupper
   close      index      match      sprintf    substr
   cos        int        rand       sqrt       system
   exp        length     sin        srand      tolower

The above-listed keywords and names of built-in functions are considered reserved words.

12. The token NAME shall consist of a word that is not a keyword or a name of a built-in function and is not followed immediately (without any delimiters) by the '(' character.

13. The token FUNC_NAME shall consist of a word that is not a keyword or a name of a built-in function, followed immediately (without any delimiters) by the '(' character. The '(' character shall not be included as part of the token.

14. The following two-character sequences shall be recognized as the named tokens:

   Token Name    Sequence      Token Name    Sequence
   ADD_ASSIGN    +=            NO_MATCH      !~
   SUB_ASSIGN    -=            EQ            ==
   MUL_ASSIGN    *=            LE            <=
   DIV_ASSIGN    /=            GE            >=
   MOD_ASSIGN    %=            NE            !=
   POW_ASSIGN    ^=            INCR          ++
   OR            ||            DECR          --
   AND           &&            APPEND        >>

15. The following single characters shall be recognized as tokens whose names are the character:

   <newline> { } ( ) [ ] , ; + - * % ^ ! > < | ? : ~ $ =

There is a lexical ambiguity between the token ERE and the tokens '/' and DIV_ASSIGN. When an input sequence begins with a <slash> character in any syntactic context where the token '/' or DIV_ASSIGN could appear as the next token in a valid program, the longer of those two tokens that can be recognized shall be recognized.
In any other syntactic context where the token ERE could appear as the next token in a valid program, the token ERE shall be recognized.

EXIT STATUS
The following exit values shall be returned:

0   All input files were processed successfully.

>0  An error occurred.

The exit status can be altered within the program by using an exit expression.

CONSEQUENCES OF ERRORS
If any file operand is specified and the named file cannot be accessed, awk shall write a diagnostic message to standard error and terminate without any further action.

If the program specified by either the program operand or a progfile operand is not a valid awk program (as specified in the EXTENDED DESCRIPTION section), the behavior is undefined.

The following sections are informative.

APPLICATION USAGE
The index, length, match, and substr functions should not be confused with similar functions in the ISO C standard; the awk versions deal with characters, while the ISO C standard deals with bytes.

Because the concatenation operation is represented by adjacent expressions rather than an explicit operator, it is often necessary to use parentheses to enforce the proper evaluation precedence.

When using awk to process pathnames, it is recommended that LC_ALL, or at least LC_CTYPE and LC_COLLATE, are set to POSIX or C in the environment, since pathnames can contain byte sequences that do not form valid characters in some locales, in which case the utility's behavior would be undefined. In the POSIX locale each byte is a valid single-byte character, and therefore this problem is avoided.
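The parenthesization advice above can be made concrete: adjacent expressions concatenate, and concatenation binds more loosely than '+' but more tightly than the comparison operators. An illustrative sketch, not normative text:

```shell
awk 'BEGIN {
    print "id" 1 + 1       # "+" evaluates first: "id" concatenated with 2
    print ("id" 1) + 1     # "id1" has numeric value 0, so this prints 1
}'
```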
On implementations where the "==" operator checks if strings collate equally, applications needing to check whether strings are identical can use:

   length(a) == length(b) && index(a,b) == 1

On implementations where the "==" operator checks if strings are identical, applications needing to check whether strings collate equally can use:

   a <= b && a >= b

EXAMPLES
The awk program specified in the command line is most easily specified within single-quotes (for example, 'program') for applications using sh, because awk programs commonly contain characters that are special to the shell, including double-quotes. In the cases where an awk program contains single-quote characters, it is usually easiest to specify most of the program as strings within single-quotes concatenated by the shell with quoted single-quote characters. For example:

   awk '/'\''/ { print "quote:", $0 }'

prints all lines from the standard input containing a single-quote character, prefixed with quote:.

The following are examples of simple awk programs:

1. Write to the standard output all input lines for which field 3 is greater than 5:

   $3 > 5

2. Write every tenth line:

   (NR % 10) == 0

3. Write any line with a substring matching the regular expression:

   /(G|D)(2[0-9][[:alpha:]]*)/

4. Print any line with a substring containing a 'G' or 'D', followed by a sequence of digits and characters. This example uses character classes digit and alpha to match language-independent digit and alphabetic characters respectively:

   /(G|D)([[:digit:][:alpha:]]*)/

5. Write any line in which the second field matches the regular expression and the fourth field does not:

   $2 ~ /xyz/ && $4 !~ /xyz/

6. Write any line in which the second field contains a <backslash>:

   $2 ~ /\\/

7. Write any line in which the second field contains a <backslash>. Note that <backslash>-escapes are interpreted twice; once in lexical processing of the string and once in processing the regular expression:

   $2 ~ "\\\\"

8.
Write the second to the last and the last field in each line. Separate the fields by a <colon>:

   {OFS=":";print $(NF-1), $NF}

9. Write the line number and number of fields in each line. The three strings representing the line number, the <colon>, and the number of fields are concatenated and that string is written to standard output:

   {print NR ":" NF}

10. Write lines longer than 72 characters:

   length($0) > 72

11. Write the first two fields in opposite order separated by OFS:

   { print $2, $1 }

12. Same, with input fields separated by a <comma> or <space> and <tab> characters, or both:

   BEGIN { FS = ",[ \t]*|[ \t]+" }
   { print $2, $1 }

13. Add up the first column, print sum, and average:

   {s += $1 }
   END {print "sum is ", s, " average is", s/NR}

14. Write fields in reverse order, one per line (many lines out for each line in):

   { for (i = NF; i > 0; --i) print $i }

15. Write all lines between occurrences of the strings start and stop:

   /start/, /stop/

16. Write all lines whose first field is different from the previous one:

   $1 != prev { print; prev = $1 }

17. Simulate echo:

   BEGIN {
       for (i = 1; i < ARGC; ++i)
           printf("%s%s", ARGV[i], i==ARGC-1?"\n":" ")
   }

18. Write the path prefixes contained in the PATH environment variable, one per line:

   BEGIN {
       n = split (ENVIRON["PATH"], path, ":")
       for (i = 1; i <= n; ++i)
           print path[i]
   }

19. If there is a file named input containing page headers of the form:

   Page #

and a file named program that contains:

   /Page/ { $2 = n++; }
   { print }

then the command line:

   awk -f program n=5 input

prints the file input, filling in page numbers starting at 5.

RATIONALE
This description is based on the new awk, "nawk" (see the referenced The AWK Programming Language), which introduced a number of new features to the historical awk:

1. New keywords: delete, do, function, return

2. New built-in functions: atan2, close, cos, gsub, match, rand, sin, srand, sub, system

3. New predefined variables: FNR, ARGC, ARGV, RSTART, RLENGTH, SUBSEP

4.
New expression operators: ?, :, ,, ^

5. The FS variable and the third argument to split, now treated as extended regular expressions.

6. The operator precedence, changed to more closely match the C language. Two examples of code that operate differently are:

   while ( n /= 10 > 1) ...
   if (!"wk" ~ /bwk/) ...

Several features have been added based on newer implementations of awk:

* Multiple instances of -f progfile are permitted.

* The new option -v assignment.

* The new predefined variable ENVIRON.

* New built-in functions toupper and tolower.

* More formatting capabilities are added to printf to match the ISO C standard.

Earlier versions of this standard required implementations to support multiple adjacent <semicolon>s, lines with one or more <semicolon> before a rule (pattern-action pairs), and lines with only <semicolon>(s). These are not required by this standard and are considered poor programming practice, but can be accepted by an implementation of awk as an extension.

The overall awk syntax has always been based on the C language, with a few features from the shell command language and other sources. Because of this, it is not completely compatible with any other language, which has caused confusion for some users. It is not the intent of the standard developers to address such issues. A few relatively minor changes toward making the language more compatible with the ISO C standard were made; most of these changes are based on similar changes in recent implementations, as described above. There remain several C-language conventions that are not in awk. One of the notable ones is the <comma> operator, which is commonly used to specify multiple expressions in the C language for statement. Also, there are various places where awk is more restrictive than the C language regarding the type of expression that can be used in a given context. These limitations are due to the different features that the awk language does provide.
Regular expressions in awk have been extended somewhat from historical implementations to make them a pure superset of extended regular expressions, as defined by POSIX.1-2008 (see the Base Definitions volume of POSIX.1-2017, Section 9.4, Extended Regular Expressions). The main extensions are internationalization features and interval expressions. Historical implementations of awk have long supported <backslash>-escape sequences as an extension to extended regular expressions, and this extension has been retained despite inconsistency with other utilities. The number of escape sequences recognized in both extended regular expressions and strings has varied (generally increasing with time) among implementations. The set specified by POSIX.1-2008 includes most sequences known to be supported by popular implementations and by the ISO C standard. One sequence that is not supported is hexadecimal value escapes beginning with '\x'. This would allow values expressed in more than 9 bits to be used within awk as in the ISO C standard. However, because this syntax has a non-deterministic length, it does not permit the subsequent character to be a hexadecimal digit. This limitation can be dealt with in the C language by the use of lexical string concatenation. In the awk language, concatenation could also be a solution for strings, but not for extended regular expressions (either lexical ERE tokens or strings used dynamically as regular expressions). Because of this limitation, the feature has not been added to POSIX.1-2008. When a string variable is used in a context where an extended regular expression normally appears (where the lexical token ERE is used in the grammar) the string does not contain the literal <slash> characters. Some versions of awk allow the form: func name(args, ... ) { statements } This has been deprecated by the authors of the language, who asked that it not be specified. 
Historical implementations of awk produce an error if a next statement is executed in a BEGIN action, and cause awk to terminate if a next statement is executed in an END action. This behavior has not been documented, and it was not believed that it was necessary to standardize it. The specification of conversions between string and numeric values is much more detailed than in the documentation of historical implementations or in the referenced The AWK Programming Language. Although most of the behavior is designed to be intuitive, the details are necessary to ensure compatible behavior from different implementations. This is especially important in relational expressions since the types of the operands determine whether a string or numeric comparison is performed. From the perspective of an application developer, it is usually sufficient to expect intuitive behavior and to force conversions (by adding zero or concatenating a null string) when the type of an expression does not obviously match what is needed. The intent has been to specify historical practice in almost all cases. The one exception is that, in historical implementations, variables and constants maintain both string and numeric values after their original value is converted by any use. This means that referencing a variable or constant can have unexpected side-effects. For example, with historical implementations the following program: { a = "+2" b = 2 if (NR % 2) c = a + b if (a == b) print "numeric comparison" else print "string comparison" } would perform a numeric comparison (and output numeric comparison) for each odd-numbered line, but perform a string comparison (and output string comparison) for each even-numbered line. POSIX.1-2008 ensures that comparisons will be numeric if necessary. 
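The forced-conversion idiom mentioned above (adding zero to force a numeric value) can be demonstrated directly. A string constant such as "+2" is not a numeric string, so comparing it with a number is a string comparison; adding zero makes the comparison numeric:

```shell
# "+2" is a string constant, so a == b compares strings and fails;
# a + 0 forces a numeric value, so the second comparison succeeds.
awk 'BEGIN {
    a = "+2"; b = 2
    if (a == b)     print "string compare: equal";  else print "string compare: not equal"
    if (a + 0 == b) print "forced numeric: equal";  else print "forced numeric: not equal"
}'
```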
With historical implementations, the following program: BEGIN { OFMT = "%e" print 3.14 OFMT = "%f" print 3.14 } would output "3.140000e+00" twice, because in the second print statement the constant "3.14" would have a string value from the previous conversion. POSIX.1-2008 requires that the output of the second print statement be "3.140000". The behavior of historical implementations was seen as too unintuitive and unpredictable. It was pointed out that with the rules contained in early drafts, the following script would print nothing: BEGIN { y[1.5] = 1 OFMT = "%e" print y[1.5] } Therefore, a new variable, CONVFMT, was introduced. The OFMT variable is now restricted to affecting output conversions of numbers to strings and CONVFMT is used for internal conversions, such as comparisons or array indexing. The default value is the same as that for OFMT, so unless a program changes CONVFMT (which no historical program would do), it will receive the historical behavior associated with internal string conversions. The POSIX awk lexical and syntactic conventions are specified more formally than in other sources. Again the intent has been to specify historical practice. One convention that may not be obvious from the formal grammar as in other verbal descriptions is where <newline> characters are acceptable. There are several obvious placements such as terminating a statement, and a <backslash> can be used to escape <newline> characters between any lexical tokens. In addition, <newline> characters without <backslash> characters can follow a comma, an open brace, a logical AND operator ("&&"), a logical OR operator ("||"), the do keyword, the else keyword, and the closing parenthesis of an if, for, or while statement. For example: { print $1, $2 } The requirement that awk add a trailing <newline> to the program argument text is to simplify the grammar, making it match a text file in form. 
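The OFMT/CONVFMT split described above can be sketched in a few lines; in a POSIX-conforming awk both variables default to "%.6g", and here each is narrowed to two decimal places:

```shell
# OFMT controls output conversions (print); CONVFMT controls internal
# conversions such as concatenation, comparison, and array indexing.
awk 'BEGIN {
    OFMT = "%.2f"
    print 3.14159          # output conversion -> 3.14
    CONVFMT = "%.2f"
    s = 3.14159 ""         # concatenation converts internally -> "3.14"
    print s
}'
```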
There is no way for an application or test suite to determine whether a literal <newline> is added or whether awk simply acts as if it did. POSIX.1-2008 requires several changes from historical implementations in order to support internationalization. Probably the most subtle of these is the use of the decimal-point character, defined by the LC_NUMERIC category of the locale, in representations of floating-point numbers. This locale-specific character is used in recognizing numeric input, in converting between strings and numeric values, and in formatting output. However, regardless of locale, the <period> character (the decimal-point character of the POSIX locale) is the decimal-point character recognized in processing awk programs (including assignments in command line arguments). This is essentially the same convention as the one used in the ISO C standard. The difference is that the C language includes the setlocale() function, which permits an application to modify its locale. Because of this capability, a C application begins executing with its locale set to the C locale, and only executes in the environment-specified locale after an explicit call to setlocale(). However, adding such an elaborate new feature to the awk language was seen as inappropriate for POSIX.1-2008. It is possible to execute an awk program explicitly in any desired locale by setting the environment in the shell. The undefined behavior resulting from NULs in extended regular expressions allows future extensions for the GNU gawk program to process binary data. The behavior in the case of invalid awk programs (including lexical, syntactic, and semantic errors) is undefined because it was considered overly limiting on implementations to specify. In most cases such errors can be expected to produce a diagnostic and a non-zero exit status. However, some implementations may choose to extend the language in ways that make use of certain invalid constructs. 
Other invalid constructs might be deemed worthy of a warning, but otherwise cause some reasonable behavior. Still other constructs may be very difficult to detect in some implementations. Also, different implementations might detect a given error during an initial parsing of the program (before reading any input files) while others might detect it when executing the program after reading some input. Implementors should be aware that diagnosing errors as early as possible and producing useful diagnostics can ease debugging of applications, and thus make an implementation more usable. The unspecified behavior from using multi-character RS values is to allow possible future extensions based on extended regular expressions used for record separators. Historical implementations take the first character of the string and ignore the others. Unspecified behavior when split(string,array,<null>) is used is to allow a proposed future extension that would split up a string into an array of individual characters. In the context of the getline function, equally good arguments for different precedences of the | and < operators can be made. Historical practice has been that: getline < "a" "b" is parsed as: ( getline < "a" ) "b" although many would argue that the intent was that the file ab should be read. However: getline < "x" + 1 parses as: getline < ( "x" + 1 ) Similar problems occur with the | version of getline, particularly in combination with $. For example: $"echo hi" | getline (This situation is particularly problematic when used in a print statement, where the |getline part might be a redirection of the print.) Since in most cases such constructs are not (or at least should not) be used (because they have a natural ambiguity for which there is no conventional parsing), the meaning of these constructs has been made explicitly unspecified. (The effect is that a conforming application that runs into the problem must parenthesize to resolve the ambiguity.) 
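Since the precedence of getline with < and | is explicitly unspecified, a conforming application must parenthesize. A minimal sketch of the portable idiom (the scratch-file path is illustrative):

```shell
printf 'one\ntwo\n' > /tmp/getline_demo.txt   # illustrative scratch file

# Parenthesizing (getline line < file) makes the comparison apply to
# getline's return value, avoiding the ambiguity described above.
awk 'BEGIN {
    while ((getline line < "/tmp/getline_demo.txt") > 0)
        print "read:", line
}'
```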
There appeared to be few if any actual uses of such constructs. Grammars can be written that would cause an error under these circumstances. Where backwards-compatibility is not a large consideration, implementors may wish to use such grammars. Some historical implementations have allowed some built-in functions to be called without an argument list, the result being a default argument list chosen in some ``reasonable'' way. Use of length as a synonym for length($0) is the only one of these forms that is thought to be widely known or widely used; this particular form is documented in various places (for example, most historical awk reference pages, although not in the referenced The AWK Programming Language) as legitimate practice. With this exception, default argument lists have always been undocumented and vaguely defined, and it is not at all clear how (or if) they should be generalized to user-defined functions. They add no useful functionality and preclude possible future extensions that might need to name functions without calling them. Not standardizing them seems the simplest course. The standard developers considered that length merited special treatment, however, since it has been documented in the past and sees possibly substantial use in historical programs. Accordingly, this usage has been made legitimate, but Issue 5 removed the obsolescent marking for XSI-conforming implementations and many otherwise conforming applications depend on this feature. In sub and gsub, if repl is a string literal (the lexical token STRING), then two consecutive <backslash> characters should be used in the string to ensure a single <backslash> will precede the <ampersand> when the resultant string is passed to the function. (For example, to specify one literal <ampersand> in the replacement string, use gsub(ERE, "\\&").) 
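The gsub replacement-string convention above can be checked quickly: in the repl argument "&" stands for the matched text, while "\\&" in a string literal passes a literal ampersand through to gsub:

```shell
# "&" in repl is the matched text; "\\&" (after lexical analysis, \&)
# produces a literal ampersand in the replacement.
echo 'foo' | awk '{ gsub(/o/, "&&"); print }'    # doubles each match: foooo
echo 'foo' | awk '{ gsub(/o/, "\\&"); print }'   # literal ampersands: f&&
```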
Historically, the only special character in the repl argument of sub and gsub string functions was the <ampersand> ('&') character and preceding it with the <backslash> character was used to turn off its special meaning. The description in the ISO POSIX-2:1993 standard introduced behavior such that the <backslash> character was another special character and it was unspecified whether there were any other special characters. This description introduced several portability problems, some of which are described below, and so it has been replaced with the more historical description. Some of the problems include: * Historically, to create the replacement string, a script could use gsub(ERE, "\\&"), but with the ISO POSIX-2:1993 standard wording, it was necessary to use gsub(ERE, "\\\\&"). The <backslash> characters are doubled here because all string literals are subject to lexical analysis, which would reduce each pair of <backslash> characters to a single <backslash> before being passed to gsub. * Since it was unspecified what the special characters were, for portable scripts to guarantee that characters are printed literally, each character had to be preceded with a <backslash>. (For example, a portable script had to use gsub(ERE, "\\h\\i") to produce a replacement string of "hi".) The description for comparisons in the ISO POSIX-2:1993 standard did not properly describe historical practice because of the way numeric strings are compared as numbers. The current rules cause the following code: if (0 == "000") print "strange, but true" else print "not true" to do a numeric comparison, causing the if to succeed. It should be intuitively obvious that this is incorrect behavior, and indeed, no historical implementation of awk actually behaves this way. 
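The distinction can be observed directly. A string constant such as "000" is not a numeric string, so comparing it with 0 is a string comparison; a field read from input is a numeric string, so the same-looking comparison is numeric:

```shell
# String constant: compared as strings ("0" vs "000"), so not equal.
awk 'BEGIN { if (0 == "000") print "numeric: equal"; else print "string: not equal" }'

# Field from input: "000" is a numeric string, compared numerically.
echo '000' | awk '$1 == 0 { print "numeric: equal" }'
```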
To fix this problem, the definition of numeric string was enhanced to include only those values obtained from specific circumstances (mostly external sources) where it is not possible to determine unambiguously whether the value is intended to be a string or a numeric. Variables that are assigned to a numeric string shall also be treated as a numeric string. (For example, the notion of a numeric string can be propagated across assignments.) In comparisons, all variables having the uninitialized value are to be treated as a numeric operand evaluating to the numeric value zero. Uninitialized variables include all types of variables including scalars, array elements, and fields. The definition of an uninitialized value in Variables and Special Variables is necessary to describe the value placed on uninitialized variables and on fields that are valid (for example, < $NF) but have no characters in them and to describe how these variables are to be used in comparisons. A valid field, such as $1, that has no characters in it can be obtained from an input line of "\t\t" when FS='\t'. Historically, the comparison ($1<10) was done numerically after evaluating $1 to the value zero. The phrase ``... also shall have the numeric value of the numeric string'' was removed from several sections of the ISO POSIX-2:1993 standard because it specifies an unnecessary implementation detail. It is not necessary for POSIX.1-2008 to specify that these objects be assigned two different values. It is only necessary to specify that these objects may evaluate to two different values depending on context. Historical implementations of awk did not parse hexadecimal integer or floating constants like "0xa" and "0xap0". Due to an oversight, the 2001 through 2004 editions of this standard required support for hexadecimal floating constants. This was due to the reference to atof(). 
This version of the standard allows but does not require implementations to use atof() and includes a description of how floating-point numbers are recognized as an alternative to match historic behavior. The intent of this change is to allow implementations to recognize floating-point constants according to either the ISO/IEC 9899:1990 standard or ISO/IEC 9899:1999 standard, and to allow (but not require) implementations to recognize hexadecimal integer constants. Historical implementations of awk did not support floating-point infinities and NaNs in numeric strings; e.g., "-INF" and "NaN". However, implementations that use the atof() or strtod() functions to do the conversion picked up support for these values if they used a ISO/IEC 9899:1999 standard version of the function instead of a ISO/IEC 9899:1990 standard version. Due to an oversight, the 2001 through 2004 editions of this standard did not allow support for infinities and NaNs, but in this revision support is allowed (but not required). This is a silent change to the behavior of awk programs; for example, in the POSIX locale the expression: ("-INF" + 0 < 0) formerly had the value 0 because "-INF" converted to 0, but now it may have the value 0 or 1. FUTURE DIRECTIONS top A future version of this standard may require the "!=" and "==" operators to perform string comparisons by checking if the strings are identical (and not by checking if they collate equally). 
SEE ALSO top Section 1.3, Grammar Conventions, grep(1p), lex(1p), sed(1p) The Base Definitions volume of POSIX.1-2017, Chapter 5, File Format Notation, Section 6.1, Portable Character Set, Chapter 8, Environment Variables, Chapter 9, Regular Expressions, Section 12.2, Utility Syntax Guidelines The System Interfaces volume of POSIX.1-2017, atof(3p), exec(1p), isspace(3p), popen(3p), setlocale(3p), strtod(3p) COPYRIGHT top Portions of this text are reprinted and reproduced in electronic form from IEEE Std 1003.1-2017, Standard for Information Technology -- Portable Operating System Interface (POSIX), The Open Group Base Specifications Issue 7, 2018 Edition, Copyright (C) 2018 by the Institute of Electrical and Electronics Engineers, Inc and The Open Group. In the event of any discrepancy between this version and the original IEEE and The Open Group Standard, the original IEEE and The Open Group Standard is the referee document. The original Standard can be obtained online at http://www.opengroup.org/unix/online.html . Any typographical or formatting errors that appear in this page are most likely to have been introduced during the conversion of the source files to man page format. To report such errors, see https://www.kernel.org/doc/man-pages/reporting_bugs.html . IEEE/The Open Group 2017 AWK(1P) Pages that refer to this page: bc(1p), colrm(1), join(1p), printf(1p), sed(1p) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH. | # awk\n\n> A versatile programming language for working on files.\n> More information: <https://github.com/onetrueawk/awk>.\n\n- Print the fifth column (a.k.a. 
field) in a space-separated file:\n\n`awk '{print $5}' {{path/to/file}}`\n\n- Print the second column of the lines containing "foo" in a space-separated file:\n\n`awk '/{{foo}}/ {print $2}' {{path/to/file}}`\n\n- Print the last column of each line in a file, using a comma (instead of space) as a field separator:\n\n`awk -F ',' '{print $NF}' {{path/to/file}}`\n\n- Sum the values in the first column of a file and print the total:\n\n`awk '{s+=$1} END {print s}' {{path/to/file}}`\n\n- Print every third line starting from the first line:\n\n`awk 'NR%3==1' {{path/to/file}}`\n\n- Print different values based on conditions:\n\n`awk '{if ($1 == "foo") print "Exact match foo"; else if ($1 ~ "bar") print "Partial match bar"; else print "Baz"}' {{path/to/file}}`\n\n- Print all lines where the 10th column value equals the specified value:\n\n`awk '($10 == {{value}})'`\n\n- Print all lines where the 10th column value is between a minimum and a maximum value:\n\n`awk '($10 >= {{min_value}} && $10 <= {{max_value}})'`\n |
b2sum | b2sum(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training b2sum(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | AUTHOR | REPORTING BUGS | COPYRIGHT | SEE ALSO | COLOPHON B2SUM(1) User Commands B2SUM(1) NAME top b2sum - compute and check BLAKE2 message digest SYNOPSIS top b2sum [OPTION]... [FILE]... DESCRIPTION top Print or check BLAKE2b (512-bit) checksums. With no FILE, or when FILE is -, read standard input. Mandatory arguments to long options are mandatory for short options too. -b, --binary read in binary mode -c, --check read checksums from the FILEs and check them -l, --length=BITS digest length in bits; must not exceed the max for the blake2 algorithm and must be a multiple of 8 --tag create a BSD-style checksum -t, --text read in text mode (default) -z, --zero end each output line with NUL, not newline, and disable file name escaping The following five options are useful only when verifying checksums: --ignore-missing don't fail or report status for missing files --quiet don't print OK for each successfully verified file --status don't output anything, status code shows success --strict exit non-zero for improperly formatted checksum lines -w, --warn warn about improperly formatted checksum lines --help display this help and exit --version output version information and exit The sums are computed as described in RFC 7693. When checking, the input should be a former output of this program. The default mode is to print a line with: checksum, a space, a character indicating input mode ('*' for binary, ' ' for text or where binary is insignificant), and name for each FILE. Note: There is no difference between binary mode and text mode on GNU systems. AUTHOR top Written by Padraig Brady and Samuel Neves. 
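The -l/--length and -c/--check options described above combine into a simple generate-then-verify workflow; the file names below are illustrative:

```shell
printf 'hello\n' > /tmp/b2demo.txt            # illustrative input file

b2sum --length 256 /tmp/b2demo.txt            # 256-bit digest: 64 hex digits
b2sum /tmp/b2demo.txt > /tmp/b2demo.sums      # record full-length checksums
b2sum --check /tmp/b2demo.sums                # verify; prints "...: OK"
```

Adding --quiet to the check step suppresses the per-file OK lines, leaving output only for mismatches.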
REPORTING BUGS top GNU coreutils online help: <https://www.gnu.org/software/coreutils/> Report any translation bugs to <https://translationproject.org/team/> COPYRIGHT top Copyright 2023 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. SEE ALSO top cksum(1) Full documentation <https://www.gnu.org/software/coreutils/b2sum> or available locally via: info '(coreutils) b2sum invocation' COLOPHON top This page is part of the coreutils (basic file, shell and text manipulation utilities) project. Information about the project can be found at http://www.gnu.org/software/coreutils/. If you have a bug report for this manual page, see http://www.gnu.org/software/coreutils/. This page was obtained from the tarball coreutils-9.4.tar.xz fetched from http://ftp.gnu.org/gnu/coreutils/ on 2023-12-22. If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org GNU coreutils 9.4 August 2023 B2SUM(1) Pages that refer to this page: md5sum(1), sha1sum(1) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH. 
| # b2sum\n\n> Calculate BLAKE2 cryptographic checksums.\n> More information: <https://www.gnu.org/software/coreutils/b2sum>.\n\n- Calculate the BLAKE2 checksum for one or more files:\n\n`b2sum {{path/to/file1 path/to/file2 ...}}`\n\n- Calculate and save the list of BLAKE2 checksums to a file:\n\n`b2sum {{path/to/file1 path/to/file2 ...}} > {{path/to/file.b2}}`\n\n- Calculate a BLAKE2 checksum from `stdin`:\n\n`{{command}} | b2sum`\n\n- Read a file of BLAKE2 sums and filenames and verify all files have matching checksums:\n\n`b2sum --check {{path/to/file.b2}}`\n\n- Only show a message for missing files or when verification fails:\n\n`b2sum --check --quiet {{path/to/file.b2}}`\n\n- Only show a message when verification fails, ignoring missing files:\n\n`b2sum --ignore-missing --check --quiet {{path/to/file.b2}}`\n |
badblocks | badblocks(8) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training badblocks(8) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | WARNING | AUTHOR | AVAILABILITY | SEE ALSO | COLOPHON BADBLOCKS(8) System Manager's Manual BADBLOCKS(8) NAME top badblocks - search a device for bad blocks SYNOPSIS top badblocks [ -svwnfBX ] [ -b block_size ] [ -c blocks_at_once ] [ -d read_delay_factor ] [ -e max_bad_blocks ] [ -i input_file ] [ -o output_file ] [ -p num_passes ] [ -t test_pattern ] device [ last_block ] [ first_block ] DESCRIPTION top badblocks is used to search for bad blocks on a device (usually a disk partition). device is the special file corresponding to the device (e.g /dev/hdc1). last_block is the last block to be checked; if it is not specified, the last block on the device is used as a default. first_block is an optional parameter specifying the starting block number for the test, which allows the testing to start in the middle of the disk. If it is not specified the first block on the disk is used as a default. Important note: If the output of badblocks is going to be fed to the e2fsck or mke2fs programs, it is important that the block size is properly specified, since the block numbers which are generated are very dependent on the block size in use by the file system. For this reason, it is strongly recommended that users not run badblocks directly, but rather use the -c option of the e2fsck and mke2fs programs. OPTIONS top -b block_size Specify the size of blocks in bytes. The default is 1024. -c number of blocks is the number of blocks which are tested at a time. The default is 64. -d read delay factor This parameter, if passed and non-zero, will cause bad blocks to sleep between reads if there were no errors encountered in the read operation; the delay will be calculated as a percentage of the time it took for the read operation to be performed. 
In other words, a value of 100 will cause each read to be delayed by the amount the previous read took, and a value of 200 by twice the amount. -e max bad block count Specify a maximum number of bad blocks before aborting the test. The default is 0, meaning the test will continue until the end of the test range is reached. -f Normally, badblocks will refuse to do a read/write or a non-destructive test on a device which is mounted, since either can cause the system to potentially crash and/or damage the file system even if it is mounted read-only. This can be overridden using the -f flag, but should almost never be used --- if you think you're smarter than the badblocks program, you almost certainly aren't. The only time when this option might be safe to use is if the /etc/mtab file is incorrect, and the device really isn't mounted. -i input_file Read a list of already existing known bad blocks. Badblocks will skip testing these blocks since they are known to be bad. If input_file is specified as "-", the list will be read from the standard input. Blocks listed in this list will be omitted from the list of new bad blocks produced on the standard output or in the output file. The -b option of dumpe2fs(8) can be used to retrieve the list of blocks currently marked bad on an existing file system, in a format suitable for use with this option. -n Use non-destructive read-write mode. By default only a non-destructive read-only test is done. This option must not be combined with the -w option, as they are mutually exclusive. -o output_file Write the list of bad blocks to the specified file. Without this option, badblocks displays the list on its standard output. The format of this file is suitable for use by the -l option in e2fsck(8) or mke2fs(8). -p num_passes Repeat scanning the disk until there are no new blocks discovered in num_passes consecutive scans of the disk. Default is 0, meaning badblocks will exit after the first pass. 
-s Show the progress of the scan by writing out rough percentage completion of the current badblocks pass over the disk. Note that badblocks may do multiple test passes over the disk, in particular if the -p or -w option is requested by the user. -t test_pattern Specify a test pattern to be read (and written) to disk blocks. The test_pattern may either be a numeric value between 0 and ULONG_MAX-1 inclusive, or the word "random", which specifies that the block should be filled with a random bit pattern. For read/write (-w) and non- destructive (-n) modes, one or more test patterns may be specified by specifying the -t option for each test pattern desired. For read-only mode only a single pattern may be specified and it may not be "random". Read-only testing with a pattern assumes that the specified pattern has previously been written to the disk - if not, large numbers of blocks will fail verification. If multiple patterns are specified then all blocks will be tested with one pattern before proceeding to the next pattern. -v Verbose mode. Will write the number of read errors, write errors and data- corruptions to stderr. -w Use write-mode test. With this option, badblocks scans for bad blocks by writing some patterns (0xaa, 0x55, 0xff, 0x00) on every block of the device, reading every block and comparing the contents. This option may not be combined with the -n option, as they are mutually exclusive. -B Use buffered I/O and do not use Direct I/O, even if it is available. -X Internal flag only to be used by e2fsck(8) and mke2fs(8). It bypasses the exclusive mode in-use device safety check. WARNING top Never use the -w option on a device containing an existing file system. This option erases data! If you want to do write-mode testing on an existing file system, use the -n option instead. It is slower, but it will preserve your data. The -e option will cause badblocks to output a possibly incomplete list of bad blocks. 
Therefore it is recommended to use it only when one wants to know if there are any bad blocks at all on the device, and not when the list of bad blocks is wanted. AUTHOR top badblocks was written by Remy Card <Remy.Card@linux.org>. Current maintainer is Theodore Ts'o <tytso@alum.mit.edu>. Non- destructive read/write test implemented by David Beattie <dbeattie@softhome.net>. AVAILABILITY top badblocks is part of the e2fsprogs package and is available from http://e2fsprogs.sourceforge.net. SEE ALSO top e2fsck(8), mke2fs(8) COLOPHON top This page is part of the e2fsprogs (utilities for ext2/3/4 filesystems) project. Information about the project can be found at http://e2fsprogs.sourceforge.net/. It is not known how to report bugs for this man page; if you know, please send a mail to man-pages@man7.org. This page was obtained from the project's upstream Git repository git://git.kernel.org/pub/scm/fs/ext2/e2fsprogs.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-07.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org E2fsprogs version 1.47.0 February 2023 BADBLOCKS(8) Pages that refer to this page: e2fsck(8), mke2fs(8), mkfs(8) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH. 
| # badblocks\n\n> Search a device for bad blocks.\n> Some usages of badblocks can cause destructive actions, such as erasing all data on a disk, including the partition table.\n> More information: <https://manned.org/badblocks>.\n\n- Search a disk for bad blocks by using a non-destructive read-only test:\n\n`sudo badblocks {{/dev/sdX}}`\n\n- Search an unmounted disk for bad blocks with a [n]on-destructive read-write test:\n\n`sudo badblocks -n {{/dev/sdX}}`\n\n- Search an unmounted disk for bad blocks with a destructive [w]rite test:\n\n`sudo badblocks -w {{/dev/sdX}}`\n\n- Use the destructive [w]rite test and [s]how [v]erbose progress:\n\n`sudo badblocks -svw {{/dev/sdX}}`\n\n- In destructive mode, [o]utput found blocks to a file:\n\n`sudo badblocks -o {{path/to/file}} -w {{/dev/sdX}}`\n\n- Use the destructive mode with improved speed using 4K [b]lock size and 64K block [c]ount:\n\n`sudo badblocks -w -b {{4096}} -c {{65536}} {{/dev/sdX}}`\n |
base32 | base32(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training base32(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | AUTHOR | REPORTING BUGS | COPYRIGHT | SEE ALSO | COLOPHON BASE32(1) User Commands BASE32(1) NAME top base32 - base32 encode/decode data and print to standard output SYNOPSIS top base32 [OPTION]... [FILE] DESCRIPTION top Base32 encode or decode FILE, or standard input, to standard output. With no FILE, or when FILE is -, read standard input. Mandatory arguments to long options are mandatory for short options too. -d, --decode decode data -i, --ignore-garbage when decoding, ignore non-alphabet characters -w, --wrap=COLS wrap encoded lines after COLS character (default 76). Use 0 to disable line wrapping --help display this help and exit --version output version information and exit The data are encoded as described for the base32 alphabet in RFC 4648. When decoding, the input may contain newlines in addition to the bytes of the formal base32 alphabet. Use --ignore-garbage to attempt to recover from any other non-alphabet bytes in the encoded stream. AUTHOR top Written by Simon Josefsson. REPORTING BUGS top GNU coreutils online help: <https://www.gnu.org/software/coreutils/> Report any translation bugs to <https://translationproject.org/team/> COPYRIGHT top Copyright 2023 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. SEE ALSO top basenc(1) Full documentation <https://www.gnu.org/software/coreutils/base32> or available locally via: info '(coreutils) base32 invocation' COLOPHON top This page is part of the coreutils (basic file, shell and text manipulation utilities) project. Information about the project can be found at http://www.gnu.org/software/coreutils/. 
If you have a bug report for this manual page, see http://www.gnu.org/software/coreutils/. This page was obtained from the tarball coreutils-9.4.tar.xz fetched from http://ftp.gnu.org/gnu/coreutils/ on 2023-12-22. If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org GNU coreutils 9.4 August 2023 BASE32(1) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH. | # base32\n\n> Encode or decode file or `stdin` to/from Base32, to `stdout`.\n> More information: <https://www.gnu.org/software/coreutils/base32>.\n\n- Encode a file:\n\n`base32 {{path/to/file}}`\n\n- Wrap encoded output at a specific width (`0` disables wrapping):\n\n`base32 --wrap {{0|76|...}} {{path/to/file}}`\n\n- Decode a file:\n\n`base32 --decode {{path/to/file}}`\n\n- Encode from `stdin`:\n\n`{{somecommand}} | base32`\n\n- Decode from `stdin`:\n\n`{{somecommand}} | base32 --decode`\n |
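A quick round trip illustrates the RFC 4648 behavior described in the base32 page above; the two-byte input `hi` (16 bits) pads out to eight output characters:

```shell
# Encode two bytes: 16 bits become four Base32 characters plus four '=' pads.
printf 'hi' | base32                 # prints NBUQ====
# Decoding the result recovers the original bytes.
printf 'NBUQ====' | base32 --decode  # prints hi
```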
base64 | base64(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training base64(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | AUTHOR | REPORTING BUGS | COPYRIGHT | SEE ALSO | COLOPHON BASE64(1) User Commands BASE64(1) NAME top base64 - base64 encode/decode data and print to standard output SYNOPSIS top base64 [OPTION]... [FILE] DESCRIPTION top Base64 encode or decode FILE, or standard input, to standard output. With no FILE, or when FILE is -, read standard input. Mandatory arguments to long options are mandatory for short options too. -d, --decode decode data -i, --ignore-garbage when decoding, ignore non-alphabet characters -w, --wrap=COLS wrap encoded lines after COLS character (default 76). Use 0 to disable line wrapping --help display this help and exit --version output version information and exit The data are encoded as described for the base64 alphabet in RFC 4648. When decoding, the input may contain newlines in addition to the bytes of the formal base64 alphabet. Use --ignore-garbage to attempt to recover from any other non-alphabet bytes in the encoded stream. AUTHOR top Written by Simon Josefsson. REPORTING BUGS top GNU coreutils online help: <https://www.gnu.org/software/coreutils/> Report any translation bugs to <https://translationproject.org/team/> COPYRIGHT top Copyright 2023 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. SEE ALSO top basenc(1) Full documentation <https://www.gnu.org/software/coreutils/base64> or available locally via: info '(coreutils) base64 invocation' COLOPHON top This page is part of the coreutils (basic file, shell and text manipulation utilities) project. Information about the project can be found at http://www.gnu.org/software/coreutils/. 
If you have a bug report for this manual page, see http://www.gnu.org/software/coreutils/. This page was obtained from the tarball coreutils-9.4.tar.xz fetched from http://ftp.gnu.org/gnu/coreutils/ on 2023-12-22. If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org GNU coreutils 9.4 August 2023 BASE64(1) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH. | # base64\n\n> Encode or decode file or `stdin` to/from Base64, to `stdout`.\n> More information: <https://www.gnu.org/software/coreutils/base64>.\n\n- Encode the contents of a file as base64 and write the result to `stdout`:\n\n`base64 {{path/to/file}}`\n\n- Wrap encoded output at a specific width (`0` disables wrapping):\n\n`base64 --wrap {{0|76|...}} {{path/to/file}}`\n\n- Decode the base64 contents of a file and write the result to `stdout`:\n\n`base64 --decode {{path/to/file}}`\n\n- Encode from `stdin`:\n\n`{{somecommand}} | base64`\n\n- Decode from `stdin`:\n\n`{{somecommand}} | base64 --decode`\n |
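As the base64 page above states, encode and decode are inverses; a minimal round trip:

```shell
# 'hello' is five bytes, so the Base64 output carries one '=' pad.
printf 'hello' | base64                  # prints aGVsbG8=
# Piping encode into decode recovers the input unchanged.
printf 'hello' | base64 | base64 --decode
```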
basename | basename(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training basename(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | EXAMPLES | AUTHOR | REPORTING BUGS | COPYRIGHT | SEE ALSO | COLOPHON BASENAME(1) User Commands BASENAME(1) NAME top basename - strip directory and suffix from filenames SYNOPSIS top basename NAME [SUFFIX] basename OPTION... NAME... DESCRIPTION top Print NAME with any leading directory components removed. If specified, also remove a trailing SUFFIX. Mandatory arguments to long options are mandatory for short options too. -a, --multiple support multiple arguments and treat each as a NAME -s, --suffix=SUFFIX remove a trailing SUFFIX; implies -a -z, --zero end each output line with NUL, not newline --help display this help and exit --version output version information and exit EXAMPLES top basename /usr/bin/sort -> "sort" basename include/stdio.h .h -> "stdio" basename -s .h include/stdio.h -> "stdio" basename -a any/str1 any/str2 -> "str1" followed by "str2" AUTHOR top Written by David MacKenzie. REPORTING BUGS top GNU coreutils online help: <https://www.gnu.org/software/coreutils/> Report any translation bugs to <https://translationproject.org/team/> COPYRIGHT top Copyright 2023 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. SEE ALSO top dirname(1), readlink(1) Full documentation <https://www.gnu.org/software/coreutils/basename> or available locally via: info '(coreutils) basename invocation' COLOPHON top This page is part of the coreutils (basic file, shell and text manipulation utilities) project. Information about the project can be found at http://www.gnu.org/software/coreutils/. If you have a bug report for this manual page, see http://www.gnu.org/software/coreutils/. 
This page was obtained from the tarball coreutils-9.4.tar.xz fetched from http://ftp.gnu.org/gnu/coreutils/ on 2023-12-22. If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org GNU coreutils 9.4 August 2023 BASENAME(1) Pages that refer to this page: dirname(1), pmsignal(1), basename(3) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH. | # basename\n\n> Remove leading directory portions from a path.\n> More information: <https://www.gnu.org/software/coreutils/basename>.\n\n- Show only the file name from a path:\n\n`basename {{path/to/file}}`\n\n- Show only the rightmost directory name from a path:\n\n`basename {{path/to/directory/}}`\n\n- Show only the file name from a path, with a suffix removed:\n\n`basename {{path/to/file}} {{suffix}}`\n |
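The EXAMPLES in the basename page above can be run verbatim; `basename` is pure string manipulation and never touches the filesystem, so the paths need not exist:

```shell
basename /usr/bin/sort          # prints sort
basename include/stdio.h .h    # prints stdio (trailing suffix removed)
basename -a any/str1 any/str2  # prints str1, then str2, one per line
```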
basenc | basenc(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training basenc(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | ENCODING EXAMPLES | AUTHOR | REPORTING BUGS | COPYRIGHT | SEE ALSO | COLOPHON BASENC(1) User Commands BASENC(1) NAME top basenc - Encode/decode data and print to standard output SYNOPSIS top basenc [OPTION]... [FILE] DESCRIPTION top basenc encode or decode FILE, or standard input, to standard output. With no FILE, or when FILE is -, read standard input. Mandatory arguments to long options are mandatory for short options too. --base64 same as 'base64' program (RFC4648 section 4) --base64url file- and url-safe base64 (RFC4648 section 5) --base32 same as 'base32' program (RFC4648 section 6) --base32hex extended hex alphabet base32 (RFC4648 section 7) --base16 hex encoding (RFC4648 section 8) --base2msbf bit string with most significant bit (msb) first --base2lsbf bit string with least significant bit (lsb) first -d, --decode decode data -i, --ignore-garbage when decoding, ignore non-alphabet characters -w, --wrap=COLS wrap encoded lines after COLS character (default 76). Use 0 to disable line wrapping --z85 ascii85-like encoding (ZeroMQ spec:32/Z85); when encoding, input length must be a multiple of 4; when decoding, input length must be a multiple of 5 --help display this help and exit --version output version information and exit When decoding, the input may contain newlines in addition to the bytes of the formal alphabet. Use --ignore-garbage to attempt to recover from any other non-alphabet bytes in the encoded stream. 
ENCODING EXAMPLES top $ printf '\376\117\202' | basenc --base64 /k+C $ printf '\376\117\202' | basenc --base64url _k-C $ printf '\376\117\202' | basenc --base32 7ZHYE=== $ printf '\376\117\202' | basenc --base32hex VP7O4=== $ printf '\376\117\202' | basenc --base16 FE4F82 $ printf '\376\117\202' | basenc --base2lsbf 011111111111001001000001 $ printf '\376\117\202' | basenc --base2msbf 111111100100111110000010 $ printf '\376\117\202\000' | basenc --z85 @.FaC AUTHOR top Written by Simon Josefsson and Assaf Gordon. REPORTING BUGS top GNU coreutils online help: <https://www.gnu.org/software/coreutils/> Report any translation bugs to <https://translationproject.org/team/> COPYRIGHT top Copyright 2023 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. SEE ALSO top Full documentation <https://www.gnu.org/software/coreutils/basenc> or available locally via: info '(coreutils) basenc invocation' COLOPHON top This page is part of the coreutils (basic file, shell and text manipulation utilities) project. Information about the project can be found at http://www.gnu.org/software/coreutils/. If you have a bug report for this manual page, see http://www.gnu.org/software/coreutils/. This page was obtained from the tarball coreutils-9.4.tar.xz fetched from http://ftp.gnu.org/gnu/coreutils/ on 2023-12-22. If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org GNU coreutils 9.4 August 2023 BASENC(1) Pages that refer to this page: base32(1), base64(1) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. 
For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH. | # basenc\n\n> Encode or decode file or `stdin` using a specified encoding, to `stdout`.\n> More information: <https://www.gnu.org/software/coreutils/basenc>.\n\n- Encode a file with base64 encoding:\n\n`basenc --base64 {{path/to/file}}`\n\n- Decode a file with base64 encoding:\n\n`basenc --decode --base64 {{path/to/file}}`\n\n- Encode from `stdin` with base32 encoding, wrapping output at 42 columns:\n\n`{{command}} | basenc --base32 -w42`\n\n- Encode from `stdin` with base32 encoding:\n\n`{{command}} | basenc --base32`\n |
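The ENCODING EXAMPLES in the basenc page above feed the same three bytes to every alphabet; two of them re-run as a sketch (assumes coreutils 8.31 or later, where `basenc` first appeared):

```shell
# Hex encoding of the bytes FE 4F 82.
printf '\376\117\202' | basenc --base16     # prints FE4F82
# The same bytes in the extended-hex Base32 alphabet.
printf '\376\117\202' | basenc --base32hex  # prints VP7O4===
```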
bash | bash(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training bash(1) Linux manual page NAME | SYNOPSIS | COPYRIGHT | DESCRIPTION | OPTIONS | ARGUMENTS | INVOCATION | DEFINITIONS | RESERVED WORDS | SHELL GRAMMAR | COMMENTS | QUOTING | PARAMETERS | EXPANSION | REDIRECTION | ALIASES | FUNCTIONS | ARITHMETIC EVALUATION | CONDITIONAL EXPRESSIONS | SIMPLE COMMAND EXPANSION | COMMAND EXECUTION | COMMAND EXECUTION ENVIRONMENT | ENVIRONMENT | EXIT STATUS | SIGNALS | JOB CONTROL | PROMPTING | READLINE | HISTORY | HISTORY EXPANSION | SHELL BUILTIN COMMANDS | SHELL COMPATIBILITY MODE | RESTRICTED SHELL | SEE ALSO | FILES | AUTHORS | BUG REPORTS | BUGS | COLOPHON BASH(1) General Commands Manual BASH(1) NAME top bash - GNU Bourne-Again SHell SYNOPSIS top bash [options] [command_string | file] COPYRIGHT top Bash is Copyright (C) 1989-2022 by the Free Software Foundation, Inc. DESCRIPTION top Bash is an sh-compatible command language interpreter that executes commands read from the standard input or from a file. Bash also incorporates useful features from the Korn and C shells (ksh and csh). Bash is intended to be a conformant implementation of the Shell and Utilities portion of the IEEE POSIX specification (IEEE Standard 1003.1). Bash can be configured to be POSIX-conformant by default. OPTIONS top All of the single-character shell options documented in the description of the set builtin command, including -o, can be used as options when the shell is invoked. In addition, bash interprets the following options when it is invoked: -c If the -c option is present, then commands are read from the first non-option argument command_string. If there are arguments after the command_string, the first argument is assigned to $0 and any remaining arguments are assigned to the positional parameters. The assignment to $0 sets the name of the shell, which is used in warning and error messages. -i If the -i option is present, the shell is interactive. 
-l Make bash act as if it had been invoked as a login shell (see INVOCATION below). -r If the -r option is present, the shell becomes restricted (see RESTRICTED SHELL below). -s If the -s option is present, or if no arguments remain after option processing, then commands are read from the standard input. This option allows the positional parameters to be set when invoking an interactive shell or when reading input through a pipe. -D A list of all double-quoted strings preceded by $ is printed on the standard output. These are the strings that are subject to language translation when the current locale is not C or POSIX. This implies the -n option; no commands will be executed. [-+]O [shopt_option] shopt_option is one of the shell options accepted by the shopt builtin (see SHELL BUILTIN COMMANDS below). If shopt_option is present, -O sets the value of that option; +O unsets it. If shopt_option is not supplied, the names and values of the shell options accepted by shopt are printed on the standard output. If the invocation option is +O, the output is displayed in a format that may be reused as input. -- A -- signals the end of options and disables further option processing. Any arguments after the -- are treated as filenames and arguments. An argument of - is equivalent to --. Bash also interprets a number of multi-character options. These options must appear on the command line before the single- character options to be recognized. --debugger Arrange for the debugger profile to be executed before the shell starts. Turns on extended debugging mode (see the description of the extdebug option to the shopt builtin below). --dump-po-strings Equivalent to -D, but the output is in the GNU gettext po (portable object) file format. --dump-strings Equivalent to -D. --help Display a usage message on standard output and exit successfully. 
--init-file file --rcfile file Execute commands from file instead of the standard personal initialization file ~/.bashrc if the shell is interactive (see INVOCATION below). --login Equivalent to -l. --noediting Do not use the GNU readline library to read command lines when the shell is interactive. --noprofile Do not read either the system-wide startup file /etc/profile or any of the personal initialization files ~/.bash_profile, ~/.bash_login, or ~/.profile. By default, bash reads these files when it is invoked as a login shell (see INVOCATION below). --norc Do not read and execute the personal initialization file ~/.bashrc if the shell is interactive. This option is on by default if the shell is invoked as sh. --posix Change the behavior of bash where the default operation differs from the POSIX standard to match the standard (posix mode). See SEE ALSO below for a reference to a document that details how posix mode affects bash's behavior. --restricted The shell becomes restricted (see RESTRICTED SHELL below). --verbose Equivalent to -v. --version Show version information for this instance of bash on the standard output and exit successfully. ARGUMENTS top If arguments remain after option processing, and neither the -c nor the -s option has been supplied, the first argument is assumed to be the name of a file containing shell commands. If bash is invoked in this fashion, $0 is set to the name of the file, and the positional parameters are set to the remaining arguments. Bash reads and executes commands from this file, then exits. Bash's exit status is the exit status of the last command executed in the script. If no commands are executed, the exit status is 0. An attempt is first made to open the file in the current directory, and, if no file is found, then the shell searches the directories in PATH for the script. INVOCATION top A login shell is one whose first character of argument zero is a -, or one started with the --login option. 
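The -c argument handling described above (first trailing argument assigned to $0, the rest to the positional parameters) is easy to observe; the names here are arbitrary:

```shell
# After the command string, 'myname' becomes $0 and foo/bar become $1/$2.
bash -c 'echo "$0 got $1 and $2"' myname foo bar  # prints: myname got foo and bar
```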
An interactive shell is one started without non-option arguments (unless -s is specified) and without the -c option, whose standard input and error are both connected to terminals (as determined by isatty(3)), or one started with the -i option. PS1 is set and $- includes i if bash is interactive, allowing a shell script or a startup file to test this state. The following paragraphs describe how bash executes its startup files. If any of the files exist but cannot be read, bash reports an error. Tildes are expanded in filenames as described below under Tilde Expansion in the EXPANSION section. When bash is invoked as an interactive login shell, or as a non- interactive shell with the --login option, it first reads and executes commands from the file /etc/profile, if that file exists. After reading that file, it looks for ~/.bash_profile, ~/.bash_login, and ~/.profile, in that order, and reads and executes commands from the first one that exists and is readable. The --noprofile option may be used when the shell is started to inhibit this behavior. When an interactive login shell exits, or a non-interactive login shell executes the exit builtin command, bash reads and executes commands from the file ~/.bash_logout, if it exists. When an interactive shell that is not a login shell is started, bash reads and executes commands from ~/.bashrc, if that file exists. This may be inhibited by using the --norc option. The --rcfile file option will force bash to read and execute commands from file instead of ~/.bashrc. When bash is started non-interactively, to run a shell script, for example, it looks for the variable BASH_ENV in the environment, expands its value if it appears there, and uses the expanded value as the name of a file to read and execute. Bash behaves as if the following command were executed: if [ -n "$BASH_ENV" ]; then . "$BASH_ENV"; fi but the value of the PATH variable is not used to search for the filename. 
If bash is invoked with the name sh, it tries to mimic the startup behavior of historical versions of sh as closely as possible, while conforming to the POSIX standard as well. When invoked as an interactive login shell, or a non-interactive shell with the --login option, it first attempts to read and execute commands from /etc/profile and ~/.profile, in that order. The --noprofile option may be used to inhibit this behavior. When invoked as an interactive shell with the name sh, bash looks for the variable ENV, expands its value if it is defined, and uses the expanded value as the name of a file to read and execute. Since a shell invoked as sh does not attempt to read and execute commands from any other startup files, the --rcfile option has no effect. A non-interactive shell invoked with the name sh does not attempt to read any other startup files. When invoked as sh, bash enters posix mode after the startup files are read. When bash is started in posix mode, as with the --posix command line option, it follows the POSIX standard for startup files. In this mode, interactive shells expand the ENV variable and commands are read and executed from the file whose name is the expanded value. No other startup files are read. Bash attempts to determine when it is being run with its standard input connected to a network connection, as when executed by the historical remote shell daemon, usually rshd, or the secure shell daemon sshd. If bash determines it is being run non- interactively in this fashion, it reads and executes commands from ~/.bashrc, if that file exists and is readable. It will not do this if invoked as sh. The --norc option may be used to inhibit this behavior, and the --rcfile option may be used to force another file to be read, but neither rshd nor sshd generally invoke the shell with those options or allow them to be specified. 
If the shell is started with the effective user (group) id not equal to the real user (group) id, and the -p option is not supplied, no startup files are read, shell functions are not inherited from the environment, the SHELLOPTS, BASHOPTS, CDPATH, and GLOBIGNORE variables, if they appear in the environment, are ignored, and the effective user id is set to the real user id. If the -p option is supplied at invocation, the startup behavior is the same, but the effective user id is not reset. DEFINITIONS top The following definitions are used throughout the rest of this document. blank A space or tab. word A sequence of characters considered as a single unit by the shell. Also known as a token. name A word consisting only of alphanumeric characters and underscores, and beginning with an alphabetic character or an underscore. Also referred to as an identifier. metacharacter A character that, when unquoted, separates words. One of the following: | & ; ( ) < > space tab newline control operator A token that performs a control function. It is one of the following symbols: || & && ; ;; ;& ;;& ( ) | |& <newline> RESERVED WORDS top Reserved words are words that have a special meaning to the shell. The following words are recognized as reserved when unquoted and either the first word of a command (see SHELL GRAMMAR below), the third word of a case or select command (only in is valid), or the third word of a for command (only in and do are valid): ! case coproc do done elif else esac fi for function if in select then until while { } time [[ ]] SHELL GRAMMAR top This section describes the syntax of the various forms of shell commands. Simple Commands A simple command is a sequence of optional variable assignments followed by blank-separated words and redirections, and terminated by a control operator. The first word specifies the command to be executed, and is passed as argument zero. The remaining words are passed as arguments to the invoked command. 
The return value of a simple command is its exit status, or 128+n if the command is terminated by signal n. Pipelines A pipeline is a sequence of one or more commands separated by one of the control operators | or |&. The format for a pipeline is: [time [-p]] [ ! ] command1 [ [| or |&] command2 ... ] The standard output of command1 is connected via a pipe to the standard input of command2. This connection is performed before any redirections specified by command1 (see REDIRECTION below). If |& is used, command1's standard error, in addition to its standard output, is connected to command2's standard input through the pipe; it is shorthand for 2>&1 |. This implicit redirection of the standard error to the standard output is performed after any redirections specified by command1. The return status of a pipeline is the exit status of the last command, unless the pipefail option is enabled. If pipefail is enabled, the pipeline's return status is the value of the last (rightmost) command to exit with a non-zero status, or zero if all commands exit successfully. If the reserved word ! precedes a pipeline, the exit status of that pipeline is the logical negation of the exit status as described above. The shell waits for all commands in the pipeline to terminate before returning a value. If the time reserved word precedes a pipeline, the elapsed as well as user and system time consumed by its execution are reported when the pipeline terminates. The -p option changes the output format to that specified by POSIX. When the shell is in posix mode, it does not recognize time as a reserved word if the next token begins with a `-'. The TIMEFORMAT variable may be set to a format string that specifies how the timing information should be displayed; see the description of TIMEFORMAT under Shell Variables below. When the shell is in posix mode, time may be followed by a newline. In this case, the shell displays the total user and system time consumed by the shell and its children.
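The return-status rules above (last command by default, rightmost failure under pipefail, logical negation with !) can be checked directly:

```shell
# By default a pipeline reports the status of its last command.
bash -c 'false | true; echo "default: $?"'                    # prints default: 0
# With pipefail, the rightmost non-zero status wins.
bash -c 'set -o pipefail; false | true; echo "pipefail: $?"'  # prints pipefail: 1
# The reserved word ! negates the pipeline's exit status.
bash -c '! false; echo "negated: $?"'                         # prints negated: 0
```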
The TIMEFORMAT variable may be used to specify the format of the time information. Each command in a multi-command pipeline, where pipes are created, is executed in a subshell, which is a separate process. See COMMAND EXECUTION ENVIRONMENT for a description of subshells and a subshell environment. If the lastpipe option is enabled using the shopt builtin (see the description of shopt below), the last element of a pipeline may be run by the shell process when job control is not active. Lists A list is a sequence of one or more pipelines separated by one of the operators ;, &, &&, or ||, and optionally terminated by one of ;, &, or <newline>. Of these list operators, && and || have equal precedence, followed by ; and &, which have equal precedence. A sequence of one or more newlines may appear in a list instead of a semicolon to delimit commands. If a command is terminated by the control operator &, the shell executes the command in the background in a subshell. The shell does not wait for the command to finish, and the return status is 0. These are referred to as asynchronous commands. Commands separated by a ; are executed sequentially; the shell waits for each command to terminate in turn. The return status is the exit status of the last command executed. AND and OR lists are sequences of one or more pipelines separated by the && and || control operators, respectively. AND and OR lists are executed with left associativity. An AND list has the form command1 && command2 command2 is executed if, and only if, command1 returns an exit status of zero (success). An OR list has the form command1 || command2 command2 is executed if, and only if, command1 returns a non-zero exit status. The return status of AND and OR lists is the exit status of the last command executed in the list. Compound Commands A compound command is one of the following. 
In most cases a list in a command's description may be separated from the rest of the command by one or more newlines, and may be followed by a newline in place of a semicolon. (list) list is executed in a subshell (see COMMAND EXECUTION ENVIRONMENT below for a description of a subshell environment). Variable assignments and builtin commands that affect the shell's environment do not remain in effect after the command completes. The return status is the exit status of list. { list; } list is simply executed in the current shell environment. list must be terminated with a newline or semicolon. This is known as a group command. The return status is the exit status of list. Note that unlike the metacharacters ( and ), { and } are reserved words and must occur where a reserved word is permitted to be recognized. Since they do not cause a word break, they must be separated from list by whitespace or another shell metacharacter. ((expression)) The expression is evaluated according to the rules described below under ARITHMETIC EVALUATION. If the value of the expression is non-zero, the return status is 0; otherwise the return status is 1. The expression undergoes the same expansions as if it were within double quotes, but double quote characters in expression are not treated specially and are removed. [[ expression ]] Return a status of 0 or 1 depending on the evaluation of the conditional expression expression. Expressions are composed of the primaries described below under CONDITIONAL EXPRESSIONS. The words between the [[ and ]] do not undergo word splitting and pathname expansion. The shell performs tilde expansion, parameter and variable expansion, arithmetic expansion, command substitution, process substitution, and quote removal on those words (the expansions that would occur if the words were enclosed in double quotes). Conditional operators such as -f must be unquoted to be recognized as primaries. 
When used with [[, the < and > operators sort lexicographically using the current locale. When the == and != operators are used, the string to the right of the operator is considered a pattern and matched according to the rules described below under Pattern Matching, as if the extglob shell option were enabled. The = operator is equivalent to ==. If the nocasematch shell option is enabled, the match is performed without regard to the case of alphabetic characters. The return value is 0 if the string matches (==) or does not match (!=) the pattern, and 1 otherwise. Any part of the pattern may be quoted to force the quoted portion to be matched as a string. An additional binary operator, =~, is available, with the same precedence as == and !=. When it is used, the string to the right of the operator is considered a POSIX extended regular expression and matched accordingly (using the POSIX regcomp and regexec interfaces usually described in regex(3)). The return value is 0 if the string matches the pattern, and 1 otherwise. If the regular expression is syntactically incorrect, the conditional expression's return value is 2. If the nocasematch shell option is enabled, the match is performed without regard to the case of alphabetic characters. If any part of the pattern is quoted, the quoted portion is matched literally. This means every character in the quoted portion matches itself, instead of having any special pattern matching meaning. If the pattern is stored in a shell variable, quoting the variable expansion forces the entire pattern to be matched literally. Treat bracket expressions in regular expressions carefully, since normal quoting and pattern characters lose their meanings between brackets. The pattern will match if it matches any part of the string. Anchor the pattern using the ^ and $ regular expression operators to force it to match the entire string. The array variable BASH_REMATCH records which parts of the string matched the pattern. 
The element of BASH_REMATCH with index 0 contains the portion of the string matching the entire regular expression. Substrings matched by parenthesized subexpressions within the regular expression are saved in the remaining BASH_REMATCH indices. The element of BASH_REMATCH with index n is the portion of the string matching the nth parenthesized subexpression. Bash sets BASH_REMATCH in the global scope; declaring it as a local variable will lead to unexpected results. Expressions may be combined using the following operators, listed in decreasing order of precedence: ( expression ) Returns the value of expression. This may be used to override the normal precedence of operators. ! expression True if expression is false. expression1 && expression2 True if both expression1 and expression2 are true. expression1 || expression2 True if either expression1 or expression2 is true. The && and || operators do not evaluate expression2 if the value of expression1 is sufficient to determine the return value of the entire conditional expression. for name [ [ in [ word ... ] ] ; ] do list ; done The list of words following in is expanded, generating a list of items. The variable name is set to each element of this list in turn, and list is executed each time. If the in word is omitted, the for command executes list once for each positional parameter that is set (see PARAMETERS below). The return status is the exit status of the last command that executes. If the expansion of the items following in results in an empty list, no commands are executed, and the return status is 0. for (( expr1 ; expr2 ; expr3 )) ; do list ; done First, the arithmetic expression expr1 is evaluated according to the rules described below under ARITHMETIC EVALUATION. The arithmetic expression expr2 is then evaluated repeatedly until it evaluates to zero. Each time expr2 evaluates to a non-zero value, list is executed and the arithmetic expression expr3 is evaluated. 
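The =~ operator and BASH_REMATCH indexing described above, in miniature; the version-string pattern is just an illustration:

```shell
# Parenthesized subexpressions land in BASH_REMATCH[1], BASH_REMATCH[2], ...
# while index 0 holds the full match. The regex must be unquoted.
bash -c '[[ "release-5.2" =~ ^release-([0-9]+)\.([0-9]+)$ ]] &&
  echo "major=${BASH_REMATCH[1]} minor=${BASH_REMATCH[2]}"'  # prints major=5 minor=2
```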
If any expression is omitted, it behaves as if it evaluates to 1. The return value is the exit status of the last command in list that is executed, or false if any of the expressions is invalid.

select name [ in word ] ; do list ; done
    The list of words following in is expanded, generating a list of items, and the set of expanded words is printed on the standard error, each preceded by a number. If the in word is omitted, the positional parameters are printed (see PARAMETERS below). select then displays the PS3 prompt and reads a line from the standard input. If the line consists of a number corresponding to one of the displayed words, then the value of name is set to that word. If the line is empty, the words and prompt are displayed again. If EOF is read, the select command completes and returns 1. Any other value read causes name to be set to null. The line read is saved in the variable REPLY. The list is executed after each selection until a break command is executed. The exit status of select is the exit status of the last command executed in list, or zero if no commands were executed.

case word in [ [(] pattern [ | pattern ] ... ) list ;; ] ... esac
    A case command first expands word, and tries to match it against each pattern in turn, using the matching rules described under Pattern Matching below. The word is expanded using tilde expansion, parameter and variable expansion, arithmetic expansion, command substitution, process substitution and quote removal. Each pattern examined is expanded using tilde expansion, parameter and variable expansion, arithmetic expansion, command substitution, process substitution, and quote removal. If the nocasematch shell option is enabled, the match is performed without regard to the case of alphabetic characters. When a match is found, the corresponding list is executed. If the ;; operator is used, no subsequent matches are attempted after the first pattern match.
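For example, a brief sketch of ;; termination (classify is a hypothetical helper name; only the first matching pattern's list runs):

```shell
classify() {
    case $1 in
        (*.tar.gz | *.tgz) kind="gzipped tarball" ;;  # the optional leading ( is allowed
        (*.zip)            kind="zip archive" ;;
        (*)                kind="unknown" ;;          # * matches anything
    esac
}
classify archive.tgz
echo "$kind"    # gzipped tarball
```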
Using ;& in place of ;; causes execution to continue with the list associated with the next set of patterns. Using ;;& in place of ;; causes the shell to test the next pattern list in the statement, if any, and execute any associated list on a successful match, continuing the case statement execution as if the pattern list had not matched. The exit status is zero if no pattern matches. Otherwise, it is the exit status of the last command executed in list.

if list; then list; [ elif list; then list; ] ... [ else list; ] fi
    The if list is executed. If its exit status is zero, the then list is executed. Otherwise, each elif list is executed in turn, and if its exit status is zero, the corresponding then list is executed and the command completes. Otherwise, the else list is executed, if present. The exit status is the exit status of the last command executed, or zero if no condition tested true.

while list-1; do list-2; done
until list-1; do list-2; done
    The while command continuously executes the list list-2 as long as the last command in the list list-1 returns an exit status of zero. The until command is identical to the while command, except that the test is negated: list-2 is executed as long as the last command in list-1 returns a non-zero exit status. The exit status of the while and until commands is the exit status of the last command executed in list-2, or zero if none was executed.

Coprocesses
A coprocess is a shell command preceded by the coproc reserved word. A coprocess is executed asynchronously in a subshell, as if the command had been terminated with the & control operator, with a two-way pipe established between the executing shell and the coprocess. The syntax for a coprocess is:

    coproc [NAME] command [redirections]

This creates a coprocess named NAME. command may be either a simple command or a compound command (see above). NAME is a shell variable name. If NAME is not supplied, the default name is COPROC.
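A minimal coprocess sketch under this syntax, assuming tr(1) is available; the name UPPER is illustrative:

```shell
coproc UPPER { tr '[:lower:]' '[:upper:]'; }   # two-way pipe to tr

printf 'hello\n' >&"${UPPER[1]}"   # UPPER[1]: write end (the coprocess's stdin)
eval "exec ${UPPER[1]}>&-"         # close our write end so tr sees EOF
read -r line <&"${UPPER[0]}"       # UPPER[0]: read end (the coprocess's stdout)
echo "$line"                       # HELLO
wait "$UPPER_PID"                  # reap the coprocess
```

Closing the write end before reading matters: without EOF the coprocess may block waiting for more input, and wait would never return.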
The recommended form to use for a coprocess is

    coproc NAME { command [redirections]; }

This form is recommended because simple commands result in the coprocess always being named COPROC, and it is simpler to use and more complete than the other compound commands. If command is a compound command, NAME is optional. The word following coproc determines whether that word is interpreted as a variable name: it is interpreted as NAME if it is not a reserved word that introduces a compound command. If command is a simple command, NAME is not allowed; this is to avoid confusion between NAME and the first word of the simple command.

When the coprocess is executed, the shell creates an array variable (see Arrays below) named NAME in the context of the executing shell. The standard output of command is connected via a pipe to a file descriptor in the executing shell, and that file descriptor is assigned to NAME[0]. The standard input of command is connected via a pipe to a file descriptor in the executing shell, and that file descriptor is assigned to NAME[1]. This pipe is established before any redirections specified by the command (see REDIRECTION below). The file descriptors can be utilized as arguments to shell commands and redirections using standard word expansions. Other than those created to execute command and process substitutions, the file descriptors are not available in subshells. The process ID of the shell spawned to execute the coprocess is available as the value of the variable NAME_PID. The wait builtin command may be used to wait for the coprocess to terminate.

Since the coprocess is created as an asynchronous command, the coproc command always returns success. The return status of a coprocess is the exit status of command.

Shell Function Definitions
A shell function is an object that is called like a simple command and executes a compound command with a new set of positional parameters.
Shell functions are declared as follows:

    fname () compound-command [redirection]
    function fname [()] compound-command [redirection]

This defines a function named fname. The reserved word function is optional. If the function reserved word is supplied, the parentheses are optional. The body of the function is the compound command compound-command (see Compound Commands above). That command is usually a list of commands between { and }, but may be any command listed under Compound Commands above. If the function reserved word is used, but the parentheses are not supplied, the braces are recommended. compound-command is executed whenever fname is specified as the name of a simple command. When in posix mode, fname must be a valid shell name and may not be the name of one of the POSIX special builtins. In default mode, a function name can be any unquoted shell word that does not contain $. Any redirections (see REDIRECTION below) specified when a function is defined are performed when the function is executed. The exit status of a function definition is zero unless a syntax error occurs or a readonly function with the same name already exists. When executed, the exit status of a function is the exit status of the last command executed in the body. (See FUNCTIONS below.)

COMMENTS
In a non-interactive shell, or an interactive shell in which the interactive_comments option to the shopt builtin is enabled (see SHELL BUILTIN COMMANDS below), a word beginning with # causes that word and all remaining characters on that line to be ignored. An interactive shell without the interactive_comments option enabled does not allow comments. The interactive_comments option is on by default in interactive shells.

QUOTING
Quoting is used to remove the special meaning of certain characters or words to the shell. Quoting can be used to disable special treatment for special characters, to prevent reserved words from being recognized as such, and to prevent parameter expansion.
Each of the metacharacters listed above under DEFINITIONS has special meaning to the shell and must be quoted if it is to represent itself. When the command history expansion facilities are being used (see HISTORY EXPANSION below), the history expansion character, usually !, must be quoted to prevent history expansion. There are three quoting mechanisms: the escape character, single quotes, and double quotes. A non-quoted backslash (\) is the escape character. It preserves the literal value of the next character that follows, with the exception of <newline>. If a \<newline> pair appears, and the backslash is not itself quoted, the \<newline> is treated as a line continuation (that is, it is removed from the input stream and effectively ignored). Enclosing characters in single quotes preserves the literal value of each character within the quotes. A single quote may not occur between single quotes, even when preceded by a backslash. Enclosing characters in double quotes preserves the literal value of all characters within the quotes, with the exception of $, `, \, and, when history expansion is enabled, !. When the shell is in posix mode, the ! has no special meaning within double quotes, even when history expansion is enabled. The characters $ and ` retain their special meaning within double quotes. The backslash retains its special meaning only when followed by one of the following characters: $, `, ", \, or <newline>. A double quote may be quoted within double quotes by preceding it with a backslash. If enabled, history expansion will be performed unless an ! appearing in double quotes is escaped using a backslash. The backslash preceding the ! is not removed. The special parameters * and @ have special meaning when in double quotes (see PARAMETERS below). Character sequences of the form $'string' are treated as a special variant of single quotes. 
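For example, a brief sketch of the $'...' form and its backslash decoding:

```shell
line=$'name\tvalue\n'   # \t and \n are decoded while the word is parsed
printf '%s' "$line"
c=$'\x41'               # the eight-bit character 0x41, i.e. 'A'
echo "char: $c"
```

The result behaves like an ordinary single-quoted string: no further expansion is performed on it.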
The sequence expands to string, with backslash-escaped characters in string replaced as specified by the ANSI C standard. Backslash escape sequences, if present, are decoded as follows:

    \a      alert (bell)
    \b      backspace
    \e
    \E      an escape character
    \f      form feed
    \n      new line
    \r      carriage return
    \t      horizontal tab
    \v      vertical tab
    \\      backslash
    \'      single quote
    \"      double quote
    \?      question mark
    \nnn    the eight-bit character whose value is the octal value nnn (one to three octal digits)
    \xHH    the eight-bit character whose value is the hexadecimal value HH (one or two hex digits)
    \uHHHH  the Unicode (ISO/IEC 10646) character whose value is the hexadecimal value HHHH (one to four hex digits)
    \UHHHHHHHH
            the Unicode (ISO/IEC 10646) character whose value is the hexadecimal value HHHHHHHH (one to eight hex digits)
    \cx     a control-x character

The expanded result is single-quoted, as if the dollar sign had not been present.

A double-quoted string preceded by a dollar sign ($"string") will cause the string to be translated according to the current locale. The gettext infrastructure performs the lookup and translation, using the LC_MESSAGES, TEXTDOMAINDIR, and TEXTDOMAIN shell variables. If the current locale is C or POSIX, if there are no translations available, or if the string is not translated, the dollar sign is ignored. This is a form of double quoting, so the string remains double-quoted by default, whether or not it is translated and replaced. If the noexpand_translation option is enabled using the shopt builtin, translated strings are single-quoted instead of double-quoted. See the description of shopt below under SHELL BUILTIN COMMANDS.

PARAMETERS
A parameter is an entity that stores values. It can be a name, a number, or one of the special characters listed below under Special Parameters. A variable is a parameter denoted by a name. A variable has a value and zero or more attributes.
Attributes are assigned using the declare builtin command (see declare below in SHELL BUILTIN COMMANDS). A parameter is set if it has been assigned a value. The null string is a valid value. Once a variable is set, it may be unset only by using the unset builtin command (see SHELL BUILTIN COMMANDS below).

A variable may be assigned to by a statement of the form

    name=[value]

If value is not given, the variable is assigned the null string. All values undergo tilde expansion, parameter and variable expansion, command substitution, arithmetic expansion, and quote removal (see EXPANSION below). If the variable has its integer attribute set, then value is evaluated as an arithmetic expression even if the $((...)) expansion is not used (see Arithmetic Expansion below). Word splitting and pathname expansion are not performed. Assignment statements may also appear as arguments to the alias, declare, typeset, export, readonly, and local builtin commands (declaration commands). When in posix mode, these builtins may appear in a command after one or more instances of the command builtin and retain these assignment statement properties.

In the context where an assignment statement is assigning a value to a shell variable or array index, the += operator can be used to append to or add to the variable's previous value. This includes arguments to builtin commands such as declare that accept assignment statements (declaration commands). When += is applied to a variable for which the integer attribute has been set, value is evaluated as an arithmetic expression and added to the variable's current value, which is also evaluated. When += is applied to an array variable using compound assignment (see Arrays below), the variable's value is not unset (as it is when using =), and new values are appended to the array beginning at one greater than the array's maximum index (for indexed arrays) or added as additional key-value pairs in an associative array.
When applied to a string-valued variable, value is expanded and appended to the variable's value.

A variable can be assigned the nameref attribute using the -n option to the declare or local builtin commands (see the descriptions of declare and local below) to create a nameref, or a reference to another variable. This allows variables to be manipulated indirectly. Whenever the nameref variable is referenced, assigned to, unset, or has its attributes modified (other than using or changing the nameref attribute itself), the operation is actually performed on the variable specified by the nameref variable's value. A nameref is commonly used within shell functions to refer to a variable whose name is passed as an argument to the function. For instance, if a variable name is passed to a shell function as its first argument, running

    declare -n ref=$1

inside the function creates a nameref variable ref whose value is the variable name passed as the first argument. References and assignments to ref, and changes to its attributes, are treated as references, assignments, and attribute modifications to the variable whose name was passed as $1.

If the control variable in a for loop has the nameref attribute, the list of words can be a list of shell variables, and a name reference will be established for each word in the list, in turn, when the loop is executed. Array variables cannot be given the nameref attribute. However, nameref variables can reference array variables and subscripted array variables. Namerefs can be unset using the -n option to the unset builtin. Otherwise, if unset is executed with the name of a nameref variable as an argument, the variable referenced by the nameref variable will be unset.

Positional Parameters
A positional parameter is a parameter denoted by one or more digits, other than the single digit 0. Positional parameters are assigned from the shell's arguments when it is invoked, and may be reassigned using the set builtin command.
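The nameref mechanism described above can be sketched as follows (set_default is a hypothetical function name):

```shell
set_default() {
    local -n ref=$1              # ref now aliases the variable named by $1
    if [[ -z ${ref-} ]]; then    # test goes through the reference
        ref=$2                   # assignment lands on the caller's variable
    fi
}

color=
set_default color blue
echo "$color"    # blue
```

Note that passing the name ref itself would create a circular reference; choosing an unusual local name for the nameref avoids such collisions.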
Positional parameters may not be assigned to with assignment statements. The positional parameters are temporarily replaced when a shell function is executed (see FUNCTIONS below). When a positional parameter consisting of more than a single digit is expanded, it must be enclosed in braces (see EXPANSION below).

Special Parameters
The shell treats several parameters specially. These parameters may only be referenced; assignment to them is not allowed.

*   Expands to the positional parameters, starting from one. When the expansion is not within double quotes, each positional parameter expands to a separate word. In contexts where it is performed, those words are subject to further word splitting and pathname expansion. When the expansion occurs within double quotes, it expands to a single word with the value of each parameter separated by the first character of the IFS special variable. That is, "$*" is equivalent to "$1c$2c...", where c is the first character of the value of the IFS variable. If IFS is unset, the parameters are separated by spaces. If IFS is null, the parameters are joined without intervening separators.

@   Expands to the positional parameters, starting from one. In contexts where word splitting is performed, this expands each positional parameter to a separate word; if not within double quotes, these words are subject to word splitting. In contexts where word splitting is not performed, this expands to a single word with each positional parameter separated by a space. When the expansion occurs within double quotes, each parameter expands to a separate word. That is, "$@" is equivalent to "$1" "$2" ... If the double-quoted expansion occurs within a word, the expansion of the first parameter is joined with the beginning part of the original word, and the expansion of the last parameter is joined with the last part of the original word. When there are no positional parameters, "$@" and $@ expand to nothing (i.e., they are removed).
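The difference between "$*" and "$@" can be sketched as follows (demo is a hypothetical function name):

```shell
demo() {
    local IFS=,            # "$*" joins with the first character of IFS
    joined="$*"            # one word: the parameters joined with commas
    printf '[%s]\n' "$@"   # "$@": one word per parameter, spaces preserved
}
demo "a b" c
echo "$joined"    # a b,c
```

The printf line prints [a b] and [c] on separate lines, showing that "$@" kept the embedded space intact instead of splitting on it.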
#   Expands to the number of positional parameters in decimal.

?   Expands to the exit status of the most recently executed foreground pipeline.

-   Expands to the current option flags as specified upon invocation, by the set builtin command, or those set by the shell itself (such as the -i option).

$   Expands to the process ID of the shell. In a subshell, it expands to the process ID of the current shell, not the subshell.

!   Expands to the process ID of the job most recently placed into the background, whether executed as an asynchronous command or using the bg builtin (see JOB CONTROL below).

0   Expands to the name of the shell or shell script. This is set at shell initialization. If bash is invoked with a file of commands, $0 is set to the name of that file. If bash is started with the -c option, then $0 is set to the first argument after the string to be executed, if one is present. Otherwise, it is set to the filename used to invoke bash, as given by argument zero.

Shell Variables
The following variables are set by the shell:

_   At shell startup, set to the pathname used to invoke the shell or shell script being executed as passed in the environment or argument list. Subsequently, expands to the last argument to the previous simple command executed in the foreground, after expansion. Also set to the full pathname used to invoke each command executed and placed in the environment exported to that command. When checking mail, this parameter holds the name of the mail file currently being checked.

BASH
    Expands to the full filename used to invoke this instance of bash.

BASHOPTS
    A colon-separated list of enabled shell options. Each word in the list is a valid argument for the -s option to the shopt builtin command (see SHELL BUILTIN COMMANDS below). The options appearing in BASHOPTS are those reported as on by shopt. If this variable is in the environment when bash starts up, each shell option in the list will be enabled before reading any startup files.
This variable is read-only.

BASHPID
    Expands to the process ID of the current bash process. This differs from $$ under certain circumstances, such as subshells that do not require bash to be re-initialized. Assignments to BASHPID have no effect. If BASHPID is unset, it loses its special properties, even if it is subsequently reset.

BASH_ALIASES
    An associative array variable whose members correspond to the internal list of aliases as maintained by the alias builtin. Elements added to this array appear in the alias list; however, unsetting array elements currently does not cause aliases to be removed from the alias list. If BASH_ALIASES is unset, it loses its special properties, even if it is subsequently reset.

BASH_ARGC
    An array variable whose values are the number of parameters in each frame of the current bash execution call stack. The number of parameters to the current subroutine (shell function or script executed with . or source) is at the top of the stack. When a subroutine is executed, the number of parameters passed is pushed onto BASH_ARGC. The shell sets BASH_ARGC only when in extended debugging mode (see the description of the extdebug option to the shopt builtin below). Setting extdebug after the shell has started to execute a script, or referencing this variable when extdebug is not set, may result in inconsistent values.

BASH_ARGV
    An array variable containing all of the parameters in the current bash execution call stack. The final parameter of the last subroutine call is at the top of the stack; the first parameter of the initial call is at the bottom. When a subroutine is executed, the parameters supplied are pushed onto BASH_ARGV. The shell sets BASH_ARGV only when in extended debugging mode (see the description of the extdebug option to the shopt builtin below). Setting extdebug after the shell has started to execute a script, or referencing this variable when extdebug is not set, may result in inconsistent values.
BASH_ARGV0
    When referenced, this variable expands to the name of the shell or shell script (identical to $0; see the description of special parameter 0 above). Assignment to BASH_ARGV0 causes the value assigned to also be assigned to $0. If BASH_ARGV0 is unset, it loses its special properties, even if it is subsequently reset.

BASH_CMDS
    An associative array variable whose members correspond to the internal hash table of commands as maintained by the hash builtin. Elements added to this array appear in the hash table; however, unsetting array elements currently does not cause command names to be removed from the hash table. If BASH_CMDS is unset, it loses its special properties, even if it is subsequently reset.

BASH_COMMAND
    The command currently being executed or about to be executed, unless the shell is executing a command as the result of a trap, in which case it is the command executing at the time of the trap. If BASH_COMMAND is unset, it loses its special properties, even if it is subsequently reset.

BASH_EXECUTION_STRING
    The command argument to the -c invocation option.

BASH_LINENO
    An array variable whose members are the line numbers in source files where each corresponding member of FUNCNAME was invoked. ${BASH_LINENO[$i]} is the line number in the source file (${BASH_SOURCE[$i+1]}) where ${FUNCNAME[$i]} was called (or ${BASH_LINENO[$i-1]} if referenced within another shell function). Use LINENO to obtain the current line number.

BASH_LOADABLES_PATH
    A colon-separated list of directories in which the shell looks for dynamically loadable builtins specified by the enable command.

BASH_REMATCH
    An array variable whose members are assigned by the =~ binary operator to the [[ conditional command. The element with index 0 is the portion of the string matching the entire regular expression. The element with index n is the portion of the string matching the nth parenthesized subexpression.
BASH_SOURCE
    An array variable whose members are the source filenames where the corresponding shell function names in the FUNCNAME array variable are defined. The shell function ${FUNCNAME[$i]} is defined in the file ${BASH_SOURCE[$i]} and called from ${BASH_SOURCE[$i+1]}.

BASH_SUBSHELL
    Incremented by one within each subshell or subshell environment when the shell begins executing in that environment. The initial value is 0. If BASH_SUBSHELL is unset, it loses its special properties, even if it is subsequently reset.

BASH_VERSINFO
    A readonly array variable whose members hold version information for this instance of bash. The values assigned to the array members are as follows:
        BASH_VERSINFO[0]   The major version number (the release).
        BASH_VERSINFO[1]   The minor version number (the version).
        BASH_VERSINFO[2]   The patch level.
        BASH_VERSINFO[3]   The build version.
        BASH_VERSINFO[4]   The release status (e.g., beta1).
        BASH_VERSINFO[5]   The value of MACHTYPE.

BASH_VERSION
    Expands to a string describing the version of this instance of bash.

COMP_CWORD
    An index into ${COMP_WORDS} of the word containing the current cursor position. This variable is available only in shell functions invoked by the programmable completion facilities (see Programmable Completion below).

COMP_KEY
    The key (or final key of a key sequence) used to invoke the current completion function.

COMP_LINE
    The current command line. This variable is available only in shell functions and external commands invoked by the programmable completion facilities (see Programmable Completion below).

COMP_POINT
    The index of the current cursor position relative to the beginning of the current command. If the current cursor position is at the end of the current command, the value of this variable is equal to ${#COMP_LINE}. This variable is available only in shell functions and external commands invoked by the programmable completion facilities (see Programmable Completion below).
COMP_TYPE
    Set to an integer value corresponding to the type of completion attempted that caused a completion function to be called: TAB, for normal completion, ?, for listing completions after successive tabs, !, for listing alternatives on partial word completion, @, to list completions if the word is not unmodified, or %, for menu completion. This variable is available only in shell functions and external commands invoked by the programmable completion facilities (see Programmable Completion below).

COMP_WORDBREAKS
    The set of characters that the readline library treats as word separators when performing word completion. If COMP_WORDBREAKS is unset, it loses its special properties, even if it is subsequently reset.

COMP_WORDS
    An array variable (see Arrays below) consisting of the individual words in the current command line. The line is split into words as readline would split it, using COMP_WORDBREAKS as described above. This variable is available only in shell functions invoked by the programmable completion facilities (see Programmable Completion below).

COPROC
    An array variable (see Arrays below) created to hold the file descriptors for output from and input to an unnamed coprocess (see Coprocesses above).

DIRSTACK
    An array variable (see Arrays below) containing the current contents of the directory stack. Directories appear in the stack in the order they are displayed by the dirs builtin. Assigning to members of this array variable may be used to modify directories already in the stack, but the pushd and popd builtins must be used to add and remove directories. Assignment to this variable will not change the current directory. If DIRSTACK is unset, it loses its special properties, even if it is subsequently reset.

EPOCHREALTIME
    Each time this parameter is referenced, it expands to the number of seconds since the Unix Epoch (see time(3)) as a floating point value with micro-second granularity. Assignments to EPOCHREALTIME are ignored.
If EPOCHREALTIME is unset, it loses its special properties, even if it is subsequently reset.

EPOCHSECONDS
    Each time this parameter is referenced, it expands to the number of seconds since the Unix Epoch (see time(3)). Assignments to EPOCHSECONDS are ignored. If EPOCHSECONDS is unset, it loses its special properties, even if it is subsequently reset.

EUID
    Expands to the effective user ID of the current user, initialized at shell startup. This variable is readonly.

FUNCNAME
    An array variable containing the names of all shell functions currently in the execution call stack. The element with index 0 is the name of any currently-executing shell function. The bottom-most element (the one with the highest index) is "main". This variable exists only when a shell function is executing. Assignments to FUNCNAME have no effect. If FUNCNAME is unset, it loses its special properties, even if it is subsequently reset. This variable can be used with BASH_LINENO and BASH_SOURCE. Each element of FUNCNAME has corresponding elements in BASH_LINENO and BASH_SOURCE to describe the call stack. For instance, ${FUNCNAME[$i]} was called from the file ${BASH_SOURCE[$i+1]} at line number ${BASH_LINENO[$i]}. The caller builtin displays the current call stack using this information.

GROUPS
    An array variable containing the list of groups of which the current user is a member. Assignments to GROUPS have no effect. If GROUPS is unset, it loses its special properties, even if it is subsequently reset.

HISTCMD
    The history number, or index in the history list, of the current command. Assignments to HISTCMD are ignored. If HISTCMD is unset, it loses its special properties, even if it is subsequently reset.

HOSTNAME
    Automatically set to the name of the current host.

HOSTTYPE
    Automatically set to a string that uniquely describes the type of machine on which bash is executing. The default is system-dependent.
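The FUNCNAME, BASH_SOURCE, and BASH_LINENO variables described above can be combined to print a call stack (the function names here are illustrative):

```shell
where_am_i() {
    local i
    for (( i = 0; i < ${#FUNCNAME[@]}; i++ )); do
        printf '%s called from %s:%s\n' \
            "${FUNCNAME[$i]}" "${BASH_SOURCE[$i+1]:-?}" "${BASH_LINENO[$i]}"
    done
}
outer() { where_am_i; }
outer
```

The caller builtin prints the same kind of frame information one level at a time.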
LINENO
    Each time this parameter is referenced, the shell substitutes a decimal number representing the current sequential line number (starting with 1) within a script or function. When not in a script or function, the value substituted is not guaranteed to be meaningful. If LINENO is unset, it loses its special properties, even if it is subsequently reset.

MACHTYPE
    Automatically set to a string that fully describes the system type on which bash is executing, in the standard GNU cpu-company-system format. The default is system-dependent.

MAPFILE
    An array variable (see Arrays below) created to hold the text read by the mapfile builtin when no variable name is supplied.

OLDPWD
    The previous working directory as set by the cd command.

OPTARG
    The value of the last option argument processed by the getopts builtin command (see SHELL BUILTIN COMMANDS below).

OPTIND
    The index of the next argument to be processed by the getopts builtin command (see SHELL BUILTIN COMMANDS below).

OSTYPE
    Automatically set to a string that describes the operating system on which bash is executing. The default is system-dependent.

PIPESTATUS
    An array variable (see Arrays below) containing a list of exit status values from the processes in the most-recently-executed foreground pipeline (which may contain only a single command).

PPID
    The process ID of the shell's parent. This variable is readonly.

PWD
    The current working directory as set by the cd command.

RANDOM
    Each time this parameter is referenced, it expands to a random integer between 0 and 32767. Assigning a value to RANDOM initializes (seeds) the sequence of random numbers. If RANDOM is unset, it loses its special properties, even if it is subsequently reset.

READLINE_ARGUMENT
    Any numeric argument given to a readline command that was defined using "bind -x" (see SHELL BUILTIN COMMANDS below) when it was invoked.

READLINE_LINE
    The contents of the readline line buffer, for use with "bind -x" (see SHELL BUILTIN COMMANDS below).
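The OPTARG and OPTIND variables described above are set by the getopts builtin; a brief sketch (parse is a hypothetical function name):

```shell
parse() {
    local opt verbose=0 outfile= OPTIND=1   # local OPTIND so repeated calls restart
    while getopts 'vo:' opt; do
        case $opt in
            v) verbose=1 ;;
            o) outfile=$OPTARG ;;   # OPTARG holds the argument supplied to -o
            *) return 2 ;;
        esac
    done
    shift $((OPTIND - 1))           # drop the options that were processed
    echo "verbose=$verbose outfile=$outfile rest=$*"
}
parse -v -o out.txt file1           # verbose=1 outfile=out.txt rest=file1
```

Making OPTIND local (or resetting it to 1) is what allows the function to be called more than once per shell session.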
READLINE_MARK
    The position of the mark (saved insertion point) in the readline line buffer, for use with "bind -x" (see SHELL BUILTIN COMMANDS below). The characters between the insertion point and the mark are often called the region.

READLINE_POINT
    The position of the insertion point in the readline line buffer, for use with "bind -x" (see SHELL BUILTIN COMMANDS below).

REPLY
    Set to the line of input read by the read builtin command when no arguments are supplied.

SECONDS
    Each time this parameter is referenced, it expands to the number of seconds since shell invocation. If a value is assigned to SECONDS, the value returned upon subsequent references is the number of seconds since the assignment plus the value assigned. The number of seconds at shell invocation and the current time are always determined by querying the system clock. If SECONDS is unset, it loses its special properties, even if it is subsequently reset.

SHELLOPTS
    A colon-separated list of enabled shell options. Each word in the list is a valid argument for the -o option to the set builtin command (see SHELL BUILTIN COMMANDS below). The options appearing in SHELLOPTS are those reported as on by set -o. If this variable is in the environment when bash starts up, each shell option in the list will be enabled before reading any startup files. This variable is read-only.

SHLVL
    Incremented by one each time an instance of bash is started.

SRANDOM
    This variable expands to a 32-bit pseudo-random number each time it is referenced. The random number generator is not linear on systems that support /dev/urandom or arc4random, so each returned number has no relationship to the numbers preceding it. The random number generator cannot be seeded, so assignments to this variable have no effect. If SRANDOM is unset, it loses its special properties, even if it is subsequently reset.

UID
    Expands to the user ID of the current user, initialized at shell startup. This variable is readonly.
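The SECONDS and RANDOM behavior described above can be sketched as follows:

```shell
SECONDS=0                      # assignment resets the base; references count from here
sleep 1
echo "elapsed: ${SECONDS}s"    # at least 1

RANDOM=42                      # assigning a value seeds the generator
first=$RANDOM
RANDOM=42                      # re-seeding reproduces the same sequence
second=$RANDOM
[ "$first" = "$second" ] && echo "reproducible"
```

By contrast, SRANDOM cannot be seeded this way, which makes it the better choice when unpredictability matters.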
The following variables are used by the shell. In some cases, bash assigns a default value to a variable; these cases are noted below. BASH_COMPAT The value is used to set the shell's compatibility level. See SHELL COMPATIBILITY MODE below for a description of the various compatibility levels and their effects. The value may be a decimal number (e.g., 4.2) or an integer (e.g., 42) corresponding to the desired compatibility level. If BASH_COMPAT is unset or set to the empty string, the compatibility level is set to the default for the current version. If BASH_COMPAT is set to a value that is not one of the valid compatibility levels, the shell prints an error message and sets the compatibility level to the default for the current version. The valid values correspond to the compatibility levels described below under SHELL COMPATIBILITY MODE. For example, 4.2 and 42 are valid values that correspond to the compat42 shopt option and set the compatibility level to 42. The current version is also a valid value. BASH_ENV If this parameter is set when bash is executing a shell script, its value is interpreted as a filename containing commands to initialize the shell, as in ~/.bashrc. The value of BASH_ENV is subjected to parameter expansion, command substitution, and arithmetic expansion before being interpreted as a filename. PATH is not used to search for the resultant filename. BASH_XTRACEFD If set to an integer corresponding to a valid file descriptor, bash will write the trace output generated when set -x is enabled to that file descriptor. The file descriptor is closed when BASH_XTRACEFD is unset or assigned a new value. Unsetting BASH_XTRACEFD or assigning it the empty string causes the trace output to be sent to the standard error. Note that setting BASH_XTRACEFD to 2 (the standard error file descriptor) and then unsetting it will result in the standard error being closed. CDPATH The search path for the cd command. 
This is a colon-separated list of directories in which the shell looks for destination directories specified by the cd command. A sample value is ".:~:/usr". CHILD_MAX Set the number of exited child status values for the shell to remember. Bash will not allow this value to be decreased below a POSIX-mandated minimum, and there is a maximum value (currently 8192) that this may not exceed. The minimum value is system-dependent. COLUMNS Used by the select compound command to determine the terminal width when printing selection lists. Automatically set if the checkwinsize option is enabled or in an interactive shell upon receipt of a SIGWINCH. COMPREPLY An array variable from which bash reads the possible completions generated by a shell function invoked by the programmable completion facility (see Programmable Completion below). Each array element contains one possible completion. EMACS If bash finds this variable in the environment when the shell starts with value "t", it assumes that the shell is running in an Emacs shell buffer and disables line editing. ENV Expanded and executed similarly to BASH_ENV (see INVOCATION above) when an interactive shell is invoked in posix mode. EXECIGNORE A colon-separated list of shell patterns (see Pattern Matching) defining the list of filenames to be ignored by command search using PATH. Files whose full pathnames match one of these patterns are not considered executable files for the purposes of completion and command execution via PATH lookup. This does not affect the behavior of the [, test, and [[ commands. Full pathnames in the command hash table are not subject to EXECIGNORE. Use this variable to ignore shared library files that have the executable bit set, but are not executable files. The pattern matching honors the setting of the extglob shell option. FCEDIT The default editor for the fc builtin command. FIGNORE A colon-separated list of suffixes to ignore when performing filename completion (see READLINE below). 
A filename whose suffix matches one of the entries in FIGNORE is excluded from the list of matched filenames. A sample value is ".o:~". FUNCNEST If set to a numeric value greater than 0, defines a maximum function nesting level. Function invocations that exceed this nesting level will cause the current command to abort. GLOBIGNORE A colon-separated list of patterns defining the set of file names to be ignored by pathname expansion. If a file name matched by a pathname expansion pattern also matches one of the patterns in GLOBIGNORE, it is removed from the list of matches. HISTCONTROL A colon-separated list of values controlling how commands are saved on the history list. If the list of values includes ignorespace, lines which begin with a space character are not saved in the history list. A value of ignoredups causes lines matching the previous history entry to not be saved. A value of ignoreboth is shorthand for ignorespace and ignoredups. A value of erasedups causes all previous lines matching the current line to be removed from the history list before that line is saved. Any value not in the above list is ignored. If HISTCONTROL is unset, or does not include a valid value, all lines read by the shell parser are saved on the history list, subject to the value of HISTIGNORE. The second and subsequent lines of a multi-line compound command are not tested, and are added to the history regardless of the value of HISTCONTROL. HISTFILE The name of the file in which command history is saved (see HISTORY below). The default value is ~/.bash_history. If unset, the command history is not saved when a shell exits. HISTFILESIZE The maximum number of lines contained in the history file. When this variable is assigned a value, the history file is truncated, if necessary, to contain no more than that number of lines by removing the oldest entries. The history file is also truncated to this size after writing it when a shell exits. 
If the value is 0, the history file is truncated to zero size. Non-numeric values and numeric values less than zero inhibit truncation. The shell sets the default value to the value of HISTSIZE after reading any startup files. HISTIGNORE A colon-separated list of patterns used to decide which command lines should be saved on the history list. Each pattern is anchored at the beginning of the line and must match the complete line (no implicit `*' is appended). Each pattern is tested against the line after the checks specified by HISTCONTROL are applied. In addition to the normal shell pattern matching characters, `&' matches the previous history line. `&' may be escaped using a backslash; the backslash is removed before attempting a match. The second and subsequent lines of a multi-line compound command are not tested, and are added to the history regardless of the value of HISTIGNORE. The pattern matching honors the setting of the extglob shell option. HISTSIZE The number of commands to remember in the command history (see HISTORY below). If the value is 0, commands are not saved in the history list. Numeric values less than zero result in every command being saved on the history list (there is no limit). The shell sets the default value to 500 after reading any startup files. HISTTIMEFORMAT If this variable is set and not null, its value is used as a format string for strftime(3) to print the time stamp associated with each history entry displayed by the history builtin. If this variable is set, time stamps are written to the history file so they may be preserved across shell sessions. This uses the history comment character to distinguish timestamps from other history lines. HOME The home directory of the current user; the default argument for the cd builtin command. The value of this variable is also used when performing tilde expansion. 
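A typical history configuration combining the variables above (a sketch for an interactive shell's ~/.bashrc; the specific limits and patterns are only examples):

```shell
# In ~/.bashrc, for interactive shells:
HISTCONTROL=ignoreboth:erasedups   # skip space-prefixed lines and duplicates
HISTSIZE=10000                     # commands kept in memory
HISTFILESIZE=20000                 # lines kept in the history file
HISTIGNORE='ls:bg:fg:history'      # whole-line patterns never saved
HISTTIMEFORMAT='%F %T '            # strftime(3) stamp shown by `history`
```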
HOSTFILE Contains the name of a file in the same format as /etc/hosts that should be read when the shell needs to complete a hostname. The list of possible hostname completions may be changed while the shell is running; the next time hostname completion is attempted after the value is changed, bash adds the contents of the new file to the existing list. If HOSTFILE is set, but has no value, or does not name a readable file, bash attempts to read /etc/hosts to obtain the list of possible hostname completions. When HOSTFILE is unset, the hostname list is cleared. IFS The Internal Field Separator that is used for word splitting after expansion and to split lines into words with the read builtin command. The default value is ``<space><tab><newline>''. IGNOREEOF Controls the action of an interactive shell on receipt of an EOF character as the sole input. If set, the value is the number of consecutive EOF characters which must be typed as the first characters on an input line before bash exits. If the variable exists but does not have a numeric value, or has no value, the default value is 10. If it does not exist, EOF signifies the end of input to the shell. INPUTRC The filename for the readline startup file, overriding the default of ~/.inputrc (see READLINE below). INSIDE_EMACS If this variable appears in the environment when the shell starts, bash assumes that it is running inside an Emacs shell buffer and may disable line editing, depending on the value of TERM. LANG Used to determine the locale category for any category not specifically selected with a variable starting with LC_. LC_ALL This variable overrides the value of LANG and any other LC_ variable specifying a locale category. LC_COLLATE This variable determines the collation order used when sorting the results of pathname expansion, and determines the behavior of range expressions, equivalence classes, and collating sequences within pathname expansion and pattern matching. 
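The interaction of IFS with the read builtin can be shown briefly (an illustrative snippet; the colon-delimited sample line mimics an /etc/passwd entry):

```shell
# Setting IFS just for `read` splits the line on a custom delimiter.
IFS=: read -r user pass uid <<< "nobody:x:65534"
echo "$user $uid"    # nobody 65534
```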
LC_CTYPE This variable determines the interpretation of characters and the behavior of character classes within pathname expansion and pattern matching. LC_MESSAGES This variable determines the locale used to translate double-quoted strings preceded by a $. LC_NUMERIC This variable determines the locale category used for number formatting. LC_TIME This variable determines the locale category used for date and time formatting. LINES Used by the select compound command to determine the column length for printing selection lists. Automatically set if the checkwinsize option is enabled or in an interactive shell upon receipt of a SIGWINCH. MAIL If this parameter is set to a file or directory name and the MAILPATH variable is not set, bash informs the user of the arrival of mail in the specified file or Maildir-format directory. MAILCHECK Specifies how often (in seconds) bash checks for mail. The default is 60 seconds. When it is time to check for mail, the shell does so before displaying the primary prompt. If this variable is unset, or set to a value that is not a number greater than or equal to zero, the shell disables mail checking. MAILPATH A colon-separated list of filenames to be checked for mail. The message to be printed when mail arrives in a particular file may be specified by separating the filename from the message with a `?'. When used in the text of the message, $_ expands to the name of the current mailfile. Example: MAILPATH='/var/mail/bfox?"You have mail":~/shell-mail?"$_ has mail!"' Bash can be configured to supply a default value for this variable (there is no value by default), but the location of the user mail files that it uses is system dependent (e.g., /var/mail/$USER). OPTERR If set to the value 1, bash displays error messages generated by the getopts builtin command (see SHELL BUILTIN COMMANDS below). OPTERR is initialized to 1 each time the shell is invoked or a shell script is executed. PATH The search path for commands. 
It is a colon-separated list of directories in which the shell looks for commands (see COMMAND EXECUTION below). A zero-length (null) directory name in the value of PATH indicates the current directory. A null directory name may appear as two adjacent colons, or as an initial or trailing colon. The default path is system-dependent, and is set by the administrator who installs bash. A common value is ``/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/bin:/sbin''. POSIXLY_CORRECT If this variable is in the environment when bash starts, the shell enters posix mode before reading the startup files, as if the --posix invocation option had been supplied. If it is set while the shell is running, bash enables posix mode, as if the command set -o posix had been executed. When the shell enters posix mode, it sets this variable if it was not already set. PROMPT_COMMAND If this variable is set, and is an array, the value of each set element is executed as a command prior to issuing each primary prompt. If this is set but not an array variable, its value is used as a command to execute instead. PROMPT_DIRTRIM If set to a number greater than zero, the value is used as the number of trailing directory components to retain when expanding the \w and \W prompt string escapes (see PROMPTING below). Characters removed are replaced with an ellipsis. PS0 The value of this parameter is expanded (see PROMPTING below) and displayed by interactive shells after reading a command and before the command is executed. PS1 The value of this parameter is expanded (see PROMPTING below) and used as the primary prompt string. The default value is ``\s-\v\$ ''. PS2 The value of this parameter is expanded as with PS1 and used as the secondary prompt string. The default is ``> ''. PS3 The value of this parameter is used as the prompt for the select command (see SHELL GRAMMAR above). 
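The prompting variables above might be combined as follows (a sketch for ~/.bashrc; the particular commands and escapes are only examples):

```shell
# PROMPT_COMMAND may be an array; each element runs before PS1 is printed.
PROMPT_COMMAND=('history -a')     # append new history after each command
PROMPT_DIRTRIM=2                  # \w keeps only the last two path components
PS1='\u@\h:\w\$ '                 # user@host:trimmed-cwd$
```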
PS4 The value of this parameter is expanded as with PS1 and the value is printed before each command bash displays during an execution trace. The first character of the expanded value of PS4 is replicated multiple times, as necessary, to indicate multiple levels of indirection. The default is ``+ ''. SHELL This variable expands to the full pathname to the shell. If it is not set when the shell starts, bash assigns to it the full pathname of the current user's login shell. TIMEFORMAT The value of this parameter is used as a format string specifying how the timing information for pipelines prefixed with the time reserved word should be displayed. The % character introduces an escape sequence that is expanded to a time value or other information. The escape sequences and their meanings are as follows; the braces denote optional portions. %% A literal %. %[p][l]R The elapsed time in seconds. %[p][l]U The number of CPU seconds spent in user mode. %[p][l]S The number of CPU seconds spent in system mode. %P The CPU percentage, computed as (%U + %S) / %R. The optional p is a digit specifying the precision, the number of fractional digits after a decimal point. A value of 0 causes no decimal point or fraction to be output. At most three places after the decimal point may be specified; values of p greater than 3 are changed to 3. If p is not specified, the value 3 is used. The optional l specifies a longer format, including minutes, of the form MMmSS.FFs. The value of p determines whether or not the fraction is included. If this variable is not set, bash acts as if it had the value $'\nreal\t%3lR\nuser\t%3lU\nsys\t%3lS'. If the value is null, no timing information is displayed. A trailing newline is added when the format string is displayed. TMOUT If set to a value greater than zero, TMOUT is treated as the default timeout for the read builtin. The select command terminates if input does not arrive after TMOUT seconds when input is coming from a terminal. 
In an interactive shell, the value is interpreted as the number of seconds to wait for a line of input after issuing the primary prompt. Bash terminates after waiting for that number of seconds if a complete line of input does not arrive. TMPDIR If set, bash uses its value as the name of a directory in which bash creates temporary files for the shell's use. auto_resume This variable controls how the shell interacts with the user and job control. If this variable is set, single word simple commands without redirections are treated as candidates for resumption of an existing stopped job. There is no ambiguity allowed; if there is more than one job beginning with the string typed, the job most recently accessed is selected. The name of a stopped job, in this context, is the command line used to start it. If set to the value exact, the string supplied must match the name of a stopped job exactly; if set to substring, the string supplied needs to match a substring of the name of a stopped job. The substring value provides functionality analogous to the %? job identifier (see JOB CONTROL below). If set to any other value, the supplied string must be a prefix of a stopped job's name; this provides functionality analogous to the %string job identifier. histchars The two or three characters which control history expansion and tokenization (see HISTORY EXPANSION below). The first character is the history expansion character, the character which signals the start of a history expansion, normally `!'. The second character is the quick substitution character, which is used as shorthand for re-running the previous command entered, substituting one string for another in the command. The default is `^'. The optional third character is the character which indicates that the remainder of the line is a comment when found as the first character of a word, normally `#'. The history comment character causes history substitution to be skipped for the remaining words on the line. 
It does not necessarily cause the shell parser to treat the rest of the line as a comment. Arrays Bash provides one-dimensional indexed and associative array variables. Any variable may be used as an indexed array; the declare builtin will explicitly declare an array. There is no maximum limit on the size of an array, nor any requirement that members be indexed or assigned contiguously. Indexed arrays are referenced using integers (including arithmetic expressions) and are zero-based; associative arrays are referenced using arbitrary strings. Unless otherwise noted, indexed array indices must be non-negative integers. An indexed array is created automatically if any variable is assigned to using the syntax name[subscript]=value. The subscript is treated as an arithmetic expression that must evaluate to a number. To explicitly declare an indexed array, use declare -a name (see SHELL BUILTIN COMMANDS below). declare -a name[subscript] is also accepted; the subscript is ignored. Associative arrays are created using declare -A name. Attributes may be specified for an array variable using the declare and readonly builtins. Each attribute applies to all members of an array. Arrays are assigned to using compound assignments of the form name=(value1 ... valuen), where each value may be of the form [subscript]=string. Indexed array assignments do not require anything but string. Each value in the list is expanded using all the shell expansions described below under EXPANSION. When assigning to indexed arrays, if the optional brackets and subscript are supplied, that index is assigned to; otherwise the index of the element assigned is the last index assigned to by the statement plus one. Indexing starts at zero. 
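The indexed-array rules above can be demonstrated directly (an illustrative snippet; the array name fruits is arbitrary):

```shell
# Compound assignment with and without explicit subscripts.
declare -a fruits
fruits=(apple banana [5]=fig cherry)   # cherry lands at index 6 (5 + 1)
echo "${fruits[0]}"    # apple
echo "${fruits[6]}"    # cherry
echo "${#fruits[@]}"   # 4 -- indices need not be contiguous
```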
When assigning to an associative array, the words in a compound assignment may be either assignment statements, for which the subscript is required, or a list of words that is interpreted as a sequence of alternating keys and values: name=( key1 value1 key2 value2 ...). These are treated identically to name=( [key1]=value1 [key2]=value2 ...). The first word in the list determines how the remaining words are interpreted; all assignments in a list must be of the same type. When using key/value pairs, the keys may not be missing or empty; a final missing value is treated like the empty string. This syntax is also accepted by the declare builtin. Individual array elements may be assigned to using the name[subscript]=value syntax introduced above. When assigning to an indexed array, if name is subscripted by a negative number, that number is interpreted as relative to one greater than the maximum index of name, so negative indices count back from the end of the array, and an index of -1 references the last element. The += operator will append to an array variable when assigning using the compound assignment syntax; see PARAMETERS above. Any element of an array may be referenced using ${name[subscript]}. The braces are required to avoid conflicts with pathname expansion. If subscript is @ or *, the word expands to all members of name. These subscripts differ only when the word appears within double quotes. If the word is double-quoted, ${name[*]} expands to a single word with the value of each array member separated by the first character of the IFS special variable, and ${name[@]} expands each element of name to a separate word. When there are no array members, ${name[@]} expands to nothing. If the double-quoted expansion occurs within a word, the expansion of the first parameter is joined with the beginning part of the original word, and the expansion of the last parameter is joined with the last part of the original word. 
This is analogous to the expansion of the special parameters * and @ (see Special Parameters above). ${#name[subscript]} expands to the length of ${name[subscript]}. If subscript is * or @, the expansion is the number of elements in the array. If the subscript used to reference an element of an indexed array evaluates to a number less than zero, it is interpreted as relative to one greater than the maximum index of the array, so negative indices count back from the end of the array, and an index of -1 references the last element. Referencing an array variable without a subscript is equivalent to referencing the array with a subscript of 0. Any reference to a variable using a valid subscript is legal, and bash will create an array if necessary. An array variable is considered set if a subscript has been assigned a value. The null string is a valid value. It is possible to obtain the keys (indices) of an array as well as the values. ${!name[@]} and ${!name[*]} expand to the indices assigned in array variable name. The treatment when in double quotes is similar to the expansion of the special parameters @ and * within double quotes. The unset builtin is used to destroy arrays. unset name[subscript] destroys the array element at index subscript, for both indexed and associative arrays. Negative subscripts to indexed arrays are interpreted as described above. Unsetting the last element of an array variable does not unset the variable. unset name, where name is an array, removes the entire array. unset name[subscript], where subscript is * or @, behaves differently depending on whether name is an indexed or associative array. If name is an associative array, this unsets the element with subscript * or @. If name is an indexed array, unset removes all of the elements but does not remove the array itself. 
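A short illustration of associative arrays and negative subscripts (associative arrays require bash 4.0+, negative indexed-array subscripts bash 4.3+; the names ver and arr are arbitrary):

```shell
# Associative arrays use arbitrary string keys.
declare -A ver
ver=([bash]=5.2 [zsh]=5.9)
echo "${ver[bash]}"        # 5.2
echo "${!ver[@]}"          # the keys, in no guaranteed order

# Negative indices on indexed arrays count back from the end.
arr=(a b c)
echo "${arr[-1]}"          # c
unset 'arr[-1]'            # quoted, to suppress pathname expansion
echo "${#arr[@]}"          # 2
```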
When using a variable name with a subscript as an argument to a command, such as with unset, without using the word expansion syntax described above, the argument is subject to pathname expansion. If pathname expansion is not desired, the argument should be quoted. The declare, local, and readonly builtins each accept a -a option to specify an indexed array and a -A option to specify an associative array. If both options are supplied, -A takes precedence. The read builtin accepts a -a option to assign a list of words read from the standard input to an array. The set and declare builtins display array values in a way that allows them to be reused as assignments. EXPANSION top Expansion is performed on the command line after it has been split into words. There are seven kinds of expansion performed: brace expansion, tilde expansion, parameter and variable expansion, command substitution, arithmetic expansion, word splitting, and pathname expansion. The order of expansions is: brace expansion; tilde expansion, parameter and variable expansion, arithmetic expansion, and command substitution (done in a left-to-right fashion); word splitting; and pathname expansion. On systems that can support it, there is an additional expansion available: process substitution. This is performed at the same time as tilde, parameter, variable, and arithmetic expansion and command substitution. After these expansions are performed, quote characters present in the original word are removed unless they have been quoted themselves (quote removal). Only brace expansion, word splitting, and pathname expansion can increase the number of words of the expansion; other expansions expand a single word to a single word. The only exceptions to this are the expansions of "$@" and "${name[@]}", and, in most cases, $* and ${name[*]} as explained above (see PARAMETERS). Brace Expansion Brace expansion is a mechanism by which arbitrary strings may be generated. 
This mechanism is similar to pathname expansion, but the filenames generated need not exist. Patterns to be brace expanded take the form of an optional preamble, followed by either a series of comma-separated strings or a sequence expression between a pair of braces, followed by an optional postscript. The preamble is prefixed to each string contained within the braces, and the postscript is then appended to each resulting string, expanding left to right. Brace expansions may be nested. The results of each expanded string are not sorted; left to right order is preserved. For example, a{d,c,b}e expands into `ade ace abe'. A sequence expression takes the form {x..y[..incr]}, where x and y are either integers or single letters, and incr, an optional increment, is an integer. When integers are supplied, the expression expands to each number between x and y, inclusive. Supplied integers may be prefixed with 0 to force each term to have the same width. When either x or y begins with a zero, the shell attempts to force all generated terms to contain the same number of digits, zero-padding where necessary. When letters are supplied, the expression expands to each character lexicographically between x and y, inclusive, using the default C locale. Note that both x and y must be of the same type (integer or letter). When the increment is supplied, it is used as the difference between each term. The default increment is 1 or -1 as appropriate. Brace expansion is performed before any other expansions, and any characters special to other expansions are preserved in the result. It is strictly textual. Bash does not apply any syntactic interpretation to the context of the expansion or the text between the braces. A correctly-formed brace expansion must contain unquoted opening and closing braces, and at least one unquoted comma or a valid sequence expression. Any incorrectly formed brace expansion is left unchanged. 
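The comma-list and sequence-expression forms described above expand as follows:

```shell
echo a{d,c,b}e        # ade ace abe -- left-to-right order, not sorted
echo {1..10..3}       # 1 4 7 10 -- increment of 3
echo {01..5}          # 01 02 03 04 05 -- zero-padded to equal width
echo {a..e}           # a b c d e -- letters in the C locale
```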
A { or , may be quoted with a backslash to prevent its being considered part of a brace expression. To avoid conflicts with parameter expansion, the string ${ is not considered eligible for brace expansion, and inhibits brace expansion until the closing }. This construct is typically used as shorthand when the common prefix of the strings to be generated is longer than in the above example: mkdir /usr/local/src/bash/{old,new,dist,bugs} or chown root /usr/{ucb/{ex,edit},lib/{ex?.?*,how_ex}} Brace expansion introduces a slight incompatibility with historical versions of sh. sh does not treat opening or closing braces specially when they appear as part of a word, and preserves them in the output. Bash removes braces from words as a consequence of brace expansion. For example, a word entered to sh as file{1,2} appears identically in the output. The same word is output as file1 file2 after expansion by bash. If strict compatibility with sh is desired, start bash with the +B option or disable brace expansion with the +B option to the set command (see SHELL BUILTIN COMMANDS below). Tilde Expansion If a word begins with an unquoted tilde character (`~'), all of the characters preceding the first unquoted slash (or all characters, if there is no unquoted slash) are considered a tilde-prefix. If none of the characters in the tilde-prefix are quoted, the characters in the tilde-prefix following the tilde are treated as a possible login name. If this login name is the null string, the tilde is replaced with the value of the shell parameter HOME. If HOME is unset, the home directory of the user executing the shell is substituted instead. Otherwise, the tilde-prefix is replaced with the home directory associated with the specified login name. If the tilde-prefix is a `~+', the value of the shell variable PWD replaces the tilde-prefix. If the tilde-prefix is a `~-', the value of the shell variable OLDPWD, if it is set, is substituted. 
If the characters following the tilde in the tilde-prefix consist of a number N, optionally prefixed by a `+' or a `-', the tilde-prefix is replaced with the corresponding element from the directory stack, as it would be displayed by the dirs builtin invoked with the tilde-prefix as an argument. If the characters following the tilde in the tilde-prefix consist of a number without a leading `+' or `-', `+' is assumed. If the login name is invalid, or the tilde expansion fails, the word is unchanged. Each variable assignment is checked for unquoted tilde-prefixes immediately following a : or the first =. In these cases, tilde expansion is also performed. Consequently, one may use filenames with tildes in assignments to PATH, MAILPATH, and CDPATH, and the shell assigns the expanded value. Bash also performs tilde expansion on words satisfying the conditions of variable assignments (as described above under PARAMETERS) when they appear as arguments to simple commands. Bash does not do this, except for the declaration commands listed above, when in posix mode. Parameter Expansion The `$' character introduces parameter expansion, command substitution, or arithmetic expansion. The parameter name or symbol to be expanded may be enclosed in braces, which are optional but serve to protect the variable to be expanded from characters immediately following it which could be interpreted as part of the name. When braces are used, the matching ending brace is the first `}' not escaped by a backslash or within a quoted string, and not within an embedded arithmetic expansion, command substitution, or parameter expansion. ${parameter} The value of parameter is substituted. The braces are required when parameter is a positional parameter with more than one digit, or when parameter is followed by a character which is not to be interpreted as part of its name. The parameter is a shell parameter as described above (see PARAMETERS) or an array reference (Arrays). 
If the first character of parameter is an exclamation point (!), and parameter is not a nameref, it introduces a level of indirection. Bash uses the value formed by expanding the rest of parameter as the new parameter; this is then expanded and that value is used in the rest of the expansion, rather than the expansion of the original parameter. This is known as indirect expansion. The value is subject to tilde expansion, parameter expansion, command substitution, and arithmetic expansion. If parameter is a nameref, this expands to the name of the parameter referenced by parameter instead of performing the complete indirect expansion. The exceptions to this are the expansions of ${!prefix*} and ${!name[@]} described below. The exclamation point must immediately follow the left brace in order to introduce indirection. In each of the cases below, word is subject to tilde expansion, parameter expansion, command substitution, and arithmetic expansion. When not performing substring expansion, using the forms documented below (e.g., :-), bash tests for a parameter that is unset or null. Omitting the colon results in a test only for a parameter that is unset. ${parameter:-word} Use Default Values. If parameter is unset or null, the expansion of word is substituted. Otherwise, the value of parameter is substituted. ${parameter:=word} Assign Default Values. If parameter is unset or null, the expansion of word is assigned to parameter. The value of parameter is then substituted. Positional parameters and special parameters may not be assigned to in this way. ${parameter:?word} Display Error if Null or Unset. If parameter is null or unset, the expansion of word (or a message to that effect if word is not present) is written to the standard error and the shell, if it is not interactive, exits. Otherwise, the value of parameter is substituted. ${parameter:+word} Use Alternate Value. 
If parameter is null or unset, nothing is substituted, otherwise the expansion of word is substituted. ${parameter:offset} ${parameter:offset:length} Substring Expansion. Expands to up to length characters of the value of parameter starting at the character specified by offset. If parameter is @ or *, an indexed array subscripted by @ or *, or an associative array name, the results differ as described below. If length is omitted, expands to the substring of the value of parameter starting at the character specified by offset and extending to the end of the value. length and offset are arithmetic expressions (see ARITHMETIC EVALUATION below). If offset evaluates to a number less than zero, the value is used as an offset in characters from the end of the value of parameter. If length evaluates to a number less than zero, it is interpreted as an offset in characters from the end of the value of parameter rather than a number of characters, and the expansion is the characters between offset and that result. Note that a negative offset must be separated from the colon by at least one space to avoid being confused with the :- expansion. If parameter is @ or *, the result is length positional parameters beginning at offset. A negative offset is taken relative to one greater than the greatest positional parameter, so an offset of -1 evaluates to the last positional parameter. It is an expansion error if length evaluates to a number less than zero. If parameter is an indexed array name subscripted by @ or *, the result is the length members of the array beginning with ${parameter[offset]}. A negative offset is taken relative to one greater than the maximum index of the specified array. It is an expansion error if length evaluates to a number less than zero. Substring expansion applied to an associative array produces undefined results. Substring indexing is zero-based unless the positional parameters are used, in which case the indexing starts at 1 by default. 
If offset is 0, and the positional parameters are used, $0 is prefixed to the list. ${!prefix*} ${!prefix@} Names matching prefix. Expands to the names of variables whose names begin with prefix, separated by the first character of the IFS special variable. When @ is used and the expansion appears within double quotes, each variable name expands to a separate word. ${!name[@]} ${!name[*]} List of array keys. If name is an array variable, expands to the list of array indices (keys) assigned in name. If name is not an array, expands to 0 if name is set and null otherwise. When @ is used and the expansion appears within double quotes, each key expands to a separate word. ${#parameter} Parameter length. The length in characters of the value of parameter is substituted. If parameter is * or @, the value substituted is the number of positional parameters. If parameter is an array name subscripted by * or @, the value substituted is the number of elements in the array. If parameter is an indexed array name subscripted by a negative number, that number is interpreted as relative to one greater than the maximum index of parameter, so negative indices count back from the end of the array, and an index of -1 references the last element. ${parameter#word} ${parameter##word} Remove matching prefix pattern. The word is expanded to produce a pattern just as in pathname expansion, and matched against the expanded value of parameter using the rules described under Pattern Matching below. If the pattern matches the beginning of the value of parameter, then the result of the expansion is the expanded value of parameter with the shortest matching pattern (the ``#'' case) or the longest matching pattern (the ``##'' case) deleted. If parameter is @ or *, the pattern removal operation is applied to each positional parameter in turn, and the expansion is the resultant list. 
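Substring expansion, parameter length, and prefix-name expansion can be sketched briefly (an illustration, assuming no other variables in the environment begin with my_):

```shell
string="Hello, world"
tail="${string:7}"        # "world" - from offset 7 to the end
head="${string:0:5}"      # "Hello" - five characters from offset 0
neg="${string: -5}"       # "world"; the space keeps ':-' from being parsed
len="${#string}"          # 12 characters

my_a=1 my_b=2
names="${!my_*}"          # variable names beginning with "my_"
echo "$tail $head $neg $len $names"
```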
If parameter is an array variable subscripted with @ or *, the pattern removal operation is applied to each member of the array in turn, and the expansion is the resultant list. ${parameter%word} ${parameter%%word} Remove matching suffix pattern. The word is expanded to produce a pattern just as in pathname expansion, and matched against the expanded value of parameter using the rules described under Pattern Matching below. If the pattern matches a trailing portion of the expanded value of parameter, then the result of the expansion is the expanded value of parameter with the shortest matching pattern (the ``%'' case) or the longest matching pattern (the ``%%'' case) deleted. If parameter is @ or *, the pattern removal operation is applied to each positional parameter in turn, and the expansion is the resultant list. If parameter is an array variable subscripted with @ or *, the pattern removal operation is applied to each member of the array in turn, and the expansion is the resultant list. ${parameter/pattern/string} ${parameter//pattern/string} ${parameter/#pattern/string} ${parameter/%pattern/string} Pattern substitution. The pattern is expanded to produce a pattern just as in pathname expansion. Parameter is expanded and the longest match of pattern against its value is replaced with string. string undergoes tilde expansion, parameter and variable expansion, arithmetic expansion, command and process substitution, and quote removal. The match is performed using the rules described under Pattern Matching below. In the first form above, only the first match is replaced. If there are two slashes separating parameter and pattern (the second form above), all matches of pattern are replaced with string. If pattern is preceded by # (the third form above), it must match at the beginning of the expanded value of parameter. If pattern is preceded by % (the fourth form above), it must match at the end of the expanded value of parameter. 
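The four prefix/suffix removal forms are easiest to see on a pathname (a sketch, not from the manual):

```shell
path="/usr/local/share/doc/readme.txt"
p1="${path#*/}"    # shortest prefix match of '*/' removed
p2="${path##*/}"   # longest prefix removed: behaves like basename
s1="${path%/*}"    # shortest suffix match of '/*' removed: like dirname
s2="${path%%.*}"   # longest suffix match of '.*' removed
echo "$p2"
```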
If the expansion of string is null, matches of pattern are deleted. If string is null, matches of pattern are deleted and the / following pattern may be omitted. If the patsub_replacement shell option is enabled using shopt, any unquoted instances of & in string are replaced with the matching portion of pattern. Quoting any part of string inhibits replacement in the expansion of the quoted portion, including replacement strings stored in shell variables. Backslash will escape & in string; the backslash is removed in order to permit a literal & in the replacement string. Backslash can also be used to escape a backslash; \\ results in a literal backslash in the replacement. Users should take care if string is double-quoted to avoid unwanted interactions between the backslash and double-quoting, since backslash has special meaning within double quotes. Pattern substitution performs the check for unquoted & after expanding string; shell programmers should quote any occurrences of & they want to be taken literally in the replacement and ensure any instances of & they want to be replaced are unquoted. If the nocasematch shell option is enabled, the match is performed without regard to the case of alphabetic characters. If parameter is @ or *, the substitution operation is applied to each positional parameter in turn, and the expansion is the resultant list. If parameter is an array variable subscripted with @ or *, the substitution operation is applied to each member of the array in turn, and the expansion is the resultant list. ${parameter^pattern} ${parameter^^pattern} ${parameter,pattern} ${parameter,,pattern} Case modification. This expansion modifies the case of alphabetic characters in parameter. The pattern is expanded to produce a pattern just as in pathname expansion. Each character in the expanded value of parameter is tested against pattern, and, if it matches the pattern, its case is converted. The pattern should not attempt to match more than one character. 
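The four pattern-substitution forms, and deletion via a null string, can be sketched as:

```shell
msg="the cat sat on the mat"
one="${msg/the/a}"      # first match only
all="${msg//the/a}"     # every match
front="${msg/#the/A}"   # anchored at the beginning
back="${msg/%mat/rug}"  # anchored at the end
gone="${msg// /}"       # null string: matches are deleted
echo "$all"
```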
The ^ operator converts lowercase letters matching pattern to uppercase; the , operator converts matching uppercase letters to lowercase. The ^^ and ,, expansions convert each matched character in the expanded value; the ^ and , expansions match and convert only the first character in the expanded value. If pattern is omitted, it is treated like a ?, which matches every character. If parameter is @ or *, the case modification operation is applied to each positional parameter in turn, and the expansion is the resultant list. If parameter is an array variable subscripted with @ or *, the case modification operation is applied to each member of the array in turn, and the expansion is the resultant list. ${parameter@operator} Parameter transformation. The expansion is either a transformation of the value of parameter or information about parameter itself, depending on the value of operator. Each operator is a single letter: U The expansion is a string that is the value of parameter with lowercase alphabetic characters converted to uppercase. u The expansion is a string that is the value of parameter with the first character converted to uppercase, if it is alphabetic. L The expansion is a string that is the value of parameter with uppercase alphabetic characters converted to lowercase. Q The expansion is a string that is the value of parameter quoted in a format that can be reused as input. E The expansion is a string that is the value of parameter with backslash escape sequences expanded as with the $'...' quoting mechanism. P The expansion is a string that is the result of expanding the value of parameter as if it were a prompt string (see PROMPTING below). A The expansion is a string in the form of an assignment statement or declare command that, if evaluated, will recreate parameter with its attributes and value. 
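A brief sketch of the case-modification operators, including a per-character pattern:

```shell
word="bash"
first_up="${word^}"       # "Bash" - first character only
all_up="${word^^}"        # "BASH" - every character
shout="LOUD NOISES"
first_down="${shout,}"    # "lOUD NOISES"
all_down="${shout,,}"     # "loud noises"
some_up="${word^^[ab]}"   # "BAsh" - only characters matching the pattern
echo "$all_up"
```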
K Produces a possibly-quoted version of the value of parameter, except that it prints the values of indexed and associative arrays as a sequence of quoted key-value pairs (see Arrays above). a The expansion is a string consisting of flag values representing parameter's attributes. k Like the K transformation, but expands the keys and values of indexed and associative arrays to separate words after word splitting. If parameter is @ or *, the operation is applied to each positional parameter in turn, and the expansion is the resultant list. If parameter is an array variable subscripted with @ or *, the operation is applied to each member of the array in turn, and the expansion is the resultant list. The result of the expansion is subject to word splitting and pathname expansion as described below. Command Substitution Command substitution allows the output of a command to replace the command name. There are two forms: $(command) or `command` Bash performs the expansion by executing command in a subshell environment and replacing the command substitution with the standard output of the command, with any trailing newlines deleted. Embedded newlines are not deleted, but they may be removed during word splitting. The command substitution $(cat file) can be replaced by the equivalent but faster $(< file). When the old-style backquote form of substitution is used, backslash retains its literal meaning except when followed by $, `, or \. The first backquote not preceded by a backslash terminates the command substitution. When using the $(command) form, all characters between the parentheses make up the command; none are treated specially. Command substitutions may be nested. To nest when using the backquoted form, escape the inner backquotes with backslashes. If the substitution appears within double quotes, word splitting and pathname expansion are not performed on the results. 
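A sketch of a few parameter transformations; it assumes bash 4.4 or later for the Q, E, and a operators:

```shell
name="gnu's bash"
quoted="${name@Q}"          # reusable as shell input, e.g. 'gnu'\''s bash'
eval "roundtrip=$quoted"    # evaluating the quoted form reproduces the value

esc='tab:\there'
expanded="${esc@E}"         # backslash escapes expanded, as with $'...'

declare -i num=3
attrs="${num@a}"            # "i" - the integer attribute flag
echo "$roundtrip"
```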
Arithmetic Expansion Arithmetic expansion allows the evaluation of an arithmetic expression and the substitution of the result. The format for arithmetic expansion is: $((expression)) The expression undergoes the same expansions as if it were within double quotes, but double quote characters in expression are not treated specially and are removed. All tokens in the expression undergo parameter and variable expansion, command substitution, and quote removal. The result is treated as the arithmetic expression to be evaluated. Arithmetic expansions may be nested. The evaluation is performed according to the rules listed below under ARITHMETIC EVALUATION. If expression is invalid, bash prints a message indicating failure and no substitution occurs. Process Substitution Process substitution allows a process's input or output to be referred to using a filename. It takes the form of <(list) or >(list). The process list is run asynchronously, and its input or output appears as a filename. This filename is passed as an argument to the current command as the result of the expansion. If the >(list) form is used, writing to the file will provide input for list. If the <(list) form is used, the file passed as an argument should be read to obtain the output of list. Process substitution is supported on systems that support named pipes (FIFOs) or the /dev/fd method of naming open files. When available, process substitution is performed simultaneously with parameter and variable expansion, command substitution, and arithmetic expansion. Word Splitting The shell scans the results of parameter expansion, command substitution, and arithmetic expansion that did not occur within double quotes for word splitting. The shell treats each character of IFS as a delimiter, and splits the results of the other expansions into words using these characters as field terminators. 
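A sketch of both expansions; process substitution assumes a system with /dev/fd or FIFO support (standard on Linux):

```shell
x=7
y=$(( x * 3 + 1 ))          # variables need no '$' inside $((...))
max=$(( x > 5 ? x : 5 ))    # the ternary operator works here, too

# Process substitution: feed two command outputs to diff, no temp files.
same=$(diff <(printf 'a\nb\n') <(printf 'a\nb\n'))
echo "$y $max"
```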
If IFS is unset, or its value is exactly <space><tab><newline>, the default, then sequences of <space>, <tab>, and <newline> at the beginning and end of the results of the previous expansions are ignored, and any sequence of IFS characters not at the beginning or end serves to delimit words. If IFS has a value other than the default, then sequences of the whitespace characters space, tab, and newline are ignored at the beginning and end of the word, as long as the whitespace character is in the value of IFS (an IFS whitespace character). Any character in IFS that is not IFS whitespace, along with any adjacent IFS whitespace characters, delimits a field. A sequence of IFS whitespace characters is also treated as a delimiter. If the value of IFS is null, no word splitting occurs. Explicit null arguments ("" or '') are retained and passed to commands as empty strings. Unquoted implicit null arguments, resulting from the expansion of parameters that have no values, are removed. If a parameter with no value is expanded within double quotes, a null argument results and is retained and passed to a command as an empty string. When a quoted null argument appears as part of a word whose expansion is non-null, the null argument is removed. That is, the word -d'' becomes -d after word splitting and null argument removal. Note that if no expansion occurs, no splitting is performed. Pathname Expansion After word splitting, unless the -f option has been set, bash scans each word for the characters *, ?, and [. If one of these characters appears, and is not quoted, then the word is regarded as a pattern, and replaced with an alphabetically sorted list of filenames matching the pattern (see Pattern Matching below). If no matching filenames are found, and the shell option nullglob is not enabled, the word is left unchanged. If the nullglob option is set, and no matches are found, the word is removed. 
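The effect of IFS on word splitting can be sketched as:

```shell
data="alice:bob:carol"
IFS=: read -r a b c <<< "$data"   # the prefix assignment affects only read

set -- $data                      # default IFS: ':' does not split
n_default=$#                      # 1 word

IFS=:
set -- $data                      # now ':' delimits fields
n_colon=$#                        # 3 words
unset IFS                         # restore default splitting
echo "$b $n_default $n_colon"
```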
If the failglob shell option is set, and no matches are found, an error message is printed and the command is not executed. If the shell option nocaseglob is enabled, the match is performed without regard to the case of alphabetic characters. When a pattern is used for pathname expansion, the character ``.'' at the start of a name or immediately following a slash must be matched explicitly, unless the shell option dotglob is set. In order to match the filenames ``.'' and ``..'', the pattern must begin with ``.'' (for example, ``.?''), even if dotglob is set. If the globskipdots shell option is enabled, the filenames ``.'' and ``..'' are never matched, even if the pattern begins with a ``.''. When not matching pathnames, the ``.'' character is not treated specially. When matching a pathname, the slash character must always be matched explicitly by a slash in the pattern, but in other matching contexts it can be matched by a special pattern character as described below under Pattern Matching. See the description of shopt below under SHELL BUILTIN COMMANDS for a description of the nocaseglob, nullglob, globskipdots, failglob, and dotglob shell options. The GLOBIGNORE shell variable may be used to restrict the set of file names matching a pattern. If GLOBIGNORE is set, each matching file name that also matches one of the patterns in GLOBIGNORE is removed from the list of matches. If the nocaseglob option is set, the matching against the patterns in GLOBIGNORE is performed without regard to case. The filenames ``.'' and ``..'' are always ignored when GLOBIGNORE is set and not null. However, setting GLOBIGNORE to a non-null value has the effect of enabling the dotglob shell option, so all other filenames beginning with a ``.'' will match. To get the old behavior of ignoring filenames beginning with a ``.'', make ``.*'' one of the patterns in GLOBIGNORE. The dotglob option is disabled when GLOBIGNORE is unset. 
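The nullglob and dotglob behaviors described above can be sketched with a throwaway directory:

```shell
tmp=$(mktemp -d)
touch "$tmp/a.txt" "$tmp/b.txt" "$tmp/.hidden"

plain=( "$tmp"/* )            # dot files are excluded by default
n_plain=${#plain[@]}          # 2

shopt -s nullglob
none=( "$tmp"/*.md )          # no match: the word is removed entirely
n_none=${#none[@]}            # 0
shopt -u nullglob

shopt -s dotglob
all=( "$tmp"/* )              # .hidden now matches ('.' and '..' never do)
n_all=${#all[@]}              # 3
shopt -u dotglob
rm -rf "$tmp"
echo "$n_plain $n_none $n_all"
```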
The pattern matching honors the setting of the extglob shell option. Pattern Matching Any character that appears in a pattern, other than the special pattern characters described below, matches itself. The NUL character may not occur in a pattern. A backslash escapes the following character; the escaping backslash is discarded when matching. The special pattern characters must be quoted if they are to be matched literally. The special pattern characters have the following meanings: * Matches any string, including the null string. When the globstar shell option is enabled, and * is used in a pathname expansion context, two adjacent *s used as a single pattern will match all files and zero or more directories and subdirectories. If followed by a /, two adjacent *s will match only directories and subdirectories. ? Matches any single character. [...] Matches any one of the enclosed characters. A pair of characters separated by a hyphen denotes a range expression; any character that falls between those two characters, inclusive, using the current locale's collating sequence and character set, is matched. If the first character following the [ is a ! or a ^ then any character not enclosed is matched. The sorting order of characters in range expressions, and the characters included in the range, are determined by the current locale and the values of the LC_COLLATE or LC_ALL shell variables, if set. To obtain the traditional interpretation of range expressions, where [a-d] is equivalent to [abcd], set the value of the LC_ALL shell variable to C, or enable the globasciiranges shell option. A - may be matched by including it as the first or last character in the set. A ] may be matched by including it as the first character in the set.
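These matching rules also apply inside [[ word == pattern ]] when the right-hand side is unquoted, which makes them easy to demonstrate (a sketch, not from the manual):

```shell
m1=$([[ file1.txt  == file?.txt ]] && echo yes || echo no)   # yes
m2=$([[ file10.txt == file?.txt ]] && echo yes || echo no)   # no: ? is one char
m3=$([[ b == [a-c]  ]] && echo yes || echo no)               # yes: range
m4=$([[ d == [!a-c] ]] && echo yes || echo no)               # yes: negated set
echo "$m1 $m2 $m3 $m4"
```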
Within [ and ], character classes can be specified using the syntax [:class:], where class is one of the following classes defined in the POSIX standard: alnum alpha ascii blank cntrl digit graph lower print punct space upper word xdigit A character class matches any character belonging to that class. The word character class matches letters, digits, and the character _. Within [ and ], an equivalence class can be specified using the syntax [=c=], which matches all characters with the same collation weight (as defined by the current locale) as the character c. Within [ and ], the syntax [.symbol.] matches the collating symbol symbol. If the extglob shell option is enabled using the shopt builtin, the shell recognizes several extended pattern matching operators. In the following description, a pattern-list is a list of one or more patterns separated by a |. Composite patterns may be formed using one or more of the following sub-patterns: ?(pattern-list) Matches zero or one occurrence of the given patterns *(pattern-list) Matches zero or more occurrences of the given patterns +(pattern-list) Matches one or more occurrences of the given patterns @(pattern-list) Matches one of the given patterns !(pattern-list) Matches anything except one of the given patterns The extglob option changes the behavior of the parser, since the parentheses are normally treated as operators with syntactic meaning. To ensure that extended matching patterns are parsed correctly, make sure that extglob is enabled before parsing constructs containing the patterns, including shell functions and command substitutions.
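The extended operators and POSIX classes can be sketched as follows; note that the shopt line must precede the lines using the patterns, for the parsing reason given above:

```shell
shopt -s extglob      # must be set before the patterns below are parsed

kind=other
[[ "photo.jpeg" == *.@(jpg|jpeg|png) ]] && kind=image      # one alternative
numbered=no
[[ "chapter12" == chapter+([[:digit:]]) ]] && numbered=yes # one or more digits
backup=yes
[[ "notes.txt" == !(*.bak) ]] && backup=no                 # anything but *.bak

shopt -u extglob
echo "$kind $numbered $backup"
```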
When matching filenames, the dotglob shell option determines the set of filenames that are tested: when dotglob is enabled, the set of filenames includes all files beginning with ``.'', but ``.'' and ``..'' must be matched by a pattern or sub-pattern that begins with a dot; when it is disabled, the set does not include any filenames beginning with ``.'' unless the pattern or sub-pattern begins with a ``.''. As above, ``.'' only has a special meaning when matching filenames. Complicated extended pattern matching against long strings is slow, especially when the patterns contain alternations and the strings contain multiple matches. Using separate matches against shorter strings, or using arrays of strings instead of a single long string, may be faster. Quote Removal After the preceding expansions, all unquoted occurrences of the characters \, ', and " that did not result from one of the above expansions are removed. REDIRECTION top Before a command is executed, its input and output may be redirected using a special notation interpreted by the shell. Redirection allows commands' file handles to be duplicated, opened, closed, made to refer to different files, and can change the files the command reads from and writes to. Redirection may also be used to modify file handles in the current shell execution environment. The following redirection operators may precede or appear anywhere within a simple command or may follow a command. Redirections are processed in the order they appear, from left to right. Each redirection that may be preceded by a file descriptor number may instead be preceded by a word of the form {varname}. In this case, for each redirection operator except >&- and <&-, the shell will allocate a file descriptor greater than or equal to 10 and assign it to varname. If >&- or <&- is preceded by {varname}, the value of varname defines the file descriptor to close.
If {varname} is supplied, the redirection persists beyond the scope of the command, allowing the shell programmer to manage the file descriptor's lifetime manually. The varredir_close shell option manages this behavior. In the following descriptions, if the file descriptor number is omitted, and the first character of the redirection operator is <, the redirection refers to the standard input (file descriptor 0). If the first character of the redirection operator is >, the redirection refers to the standard output (file descriptor 1). The word following the redirection operator in the following descriptions, unless otherwise noted, is subjected to brace expansion, tilde expansion, parameter and variable expansion, command substitution, arithmetic expansion, quote removal, pathname expansion, and word splitting. If it expands to more than one word, bash reports an error. Note that the order of redirections is significant. For example, the command ls > dirlist 2>&1 directs both standard output and standard error to the file dirlist, while the command ls 2>&1 > dirlist directs only the standard output to file dirlist, because the standard error was duplicated from the standard output before the standard output was redirected to dirlist. Bash handles several filenames specially when they are used in redirections, as described in the following table. If the operating system on which bash is running provides these special files, bash will use them; otherwise it will emulate them internally with the behavior described below. /dev/fd/fd If fd is a valid integer, file descriptor fd is duplicated. /dev/stdin File descriptor 0 is duplicated. /dev/stdout File descriptor 1 is duplicated. /dev/stderr File descriptor 2 is duplicated. /dev/tcp/host/port If host is a valid hostname or Internet address, and port is an integer port number or service name, bash attempts to open the corresponding TCP socket. 
/dev/udp/host/port If host is a valid hostname or Internet address, and port is an integer port number or service name, bash attempts to open the corresponding UDP socket. A failure to open or create a file causes the redirection to fail. Redirections using file descriptors greater than 9 should be used with care, as they may conflict with file descriptors the shell uses internally. Redirecting Input Redirection of input causes the file whose name results from the expansion of word to be opened for reading on file descriptor n, or the standard input (file descriptor 0) if n is not specified. The general format for redirecting input is: [n]<word Redirecting Output Redirection of output causes the file whose name results from the expansion of word to be opened for writing on file descriptor n, or the standard output (file descriptor 1) if n is not specified. If the file does not exist it is created; if it does exist it is truncated to zero size. The general format for redirecting output is: [n]>word If the redirection operator is >, and the noclobber option to the set builtin has been enabled, the redirection will fail if the file whose name results from the expansion of word exists and is a regular file. If the redirection operator is >|, or the redirection operator is > and the noclobber option to the set builtin command is not enabled, the redirection is attempted even if the file named by word exists. Appending Redirected Output Redirection of output in this fashion causes the file whose name results from the expansion of word to be opened for appending on file descriptor n, or the standard output (file descriptor 1) if n is not specified. If the file does not exist it is created. The general format for appending output is: [n]>>word Redirecting Standard Output and Standard Error This construct allows both the standard output (file descriptor 1) and the standard error output (file descriptor 2) to be redirected to the file whose name is the expansion of word. 
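Truncating output, appending, the significance of redirection order, and noclobber can be sketched together (an illustration, not from the manual):

```shell
tmp=$(mktemp -d)
echo "first"  > "$tmp/out"     # > creates or truncates
echo "second" >> "$tmp/out"    # >> appends

# Order matters: stdout goes to the file first, then stderr duplicates it.
{ echo out; echo err >&2; } > "$tmp/both" 2>&1
lines=$(wc -l < "$tmp/both")   # 2 - both streams landed in the file

set -o noclobber
status=clobbered
(echo x > "$tmp/out") 2>/dev/null || status=protected
set +o noclobber
rm -rf "$tmp"
echo "$lines $status"
```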
There are two formats for redirecting standard output and standard error: &>word and >&word Of the two forms, the first is preferred. This is semantically equivalent to >word 2>&1 When using the second form, word may not expand to a number or -. If it does, other redirection operators apply (see Duplicating File Descriptors below) for compatibility reasons. Appending Standard Output and Standard Error This construct allows both the standard output (file descriptor 1) and the standard error output (file descriptor 2) to be appended to the file whose name is the expansion of word. The format for appending standard output and standard error is: &>>word This is semantically equivalent to >>word 2>&1 (see Duplicating File Descriptors below). Here Documents This type of redirection instructs the shell to read input from the current source until a line containing only delimiter (with no trailing blanks) is seen. All of the lines read up to that point are then used as the standard input (or file descriptor n if n is specified) for a command. The format of here-documents is: [n]<<[-]word here-document delimiter No parameter and variable expansion, command substitution, arithmetic expansion, or pathname expansion is performed on word. If any part of word is quoted, the delimiter is the result of quote removal on word, and the lines in the here-document are not expanded. If word is unquoted, all lines of the here-document are subjected to parameter expansion, command substitution, and arithmetic expansion, the character sequence \<newline> is ignored, and \ must be used to quote the characters \, $, and `. If the redirection operator is <<-, then all leading tab characters are stripped from input lines and the line containing delimiter. This allows here-documents within shell scripts to be indented in a natural fashion. 
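The quoting of the here-document delimiter controls expansion, as the text above describes; a minimal sketch:

```shell
count=3
expanded=$(cat <<EOF
items: $count
EOF
)
literal=$(cat <<'EOF'
items: $count
EOF
)
# <<- additionally strips leading tabs, so here-documents can be indented.
echo "$expanded / $literal"
```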
Here Strings A variant of here documents, the format is: [n]<<<word The word undergoes tilde expansion, parameter and variable expansion, command substitution, arithmetic expansion, and quote removal. Pathname expansion and word splitting are not performed. The result is supplied as a single string, with a newline appended, to the command on its standard input (or file descriptor n if n is specified). Duplicating File Descriptors The redirection operator [n]<&word is used to duplicate input file descriptors. If word expands to one or more digits, the file descriptor denoted by n is made to be a copy of that file descriptor. If the digits in word do not specify a file descriptor open for input, a redirection error occurs. If word evaluates to -, file descriptor n is closed. If n is not specified, the standard input (file descriptor 0) is used. The operator [n]>&word is used similarly to duplicate output file descriptors. If n is not specified, the standard output (file descriptor 1) is used. If the digits in word do not specify a file descriptor open for output, a redirection error occurs. If word evaluates to -, file descriptor n is closed. As a special case, if n is omitted, and word does not expand to one or more digits or -, the standard output and standard error are redirected as described previously. Moving File Descriptors The redirection operator [n]<&digit- moves the file descriptor digit to file descriptor n, or the standard input (file descriptor 0) if n is not specified. digit is closed after being duplicated to n. Similarly, the redirection operator [n]>&digit- moves the file descriptor digit to file descriptor n, or the standard output (file descriptor 1) if n is not specified. Opening File Descriptors for Reading and Writing The redirection operator [n]<>word causes the file whose name is the expansion of word to be opened for both reading and writing on file descriptor n, or on file descriptor 0 if n is not specified. 
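Here strings and the {varname} form of descriptor allocation can be sketched as follows; the {varname} redirections assume bash 4.1 or later:

```shell
read -r first rest <<< "alpha beta gamma"   # here string: one line of input

tmp=$(mktemp)
exec {log}>"$tmp"        # the shell picks a free descriptor (>= 10)
echo "logged" >&"$log"
exec {log}>&-            # the value of log names the descriptor to close
line=$(cat "$tmp")
rm -f "$tmp"
echo "$first / $line"
```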
If the file does not exist, it is created. ALIASES top Aliases allow a string to be substituted for a word when it is used as the first word of a simple command. The shell maintains a list of aliases that may be set and unset with the alias and unalias builtin commands (see SHELL BUILTIN COMMANDS below). The first word of each simple command, if unquoted, is checked to see if it has an alias. If so, that word is replaced by the text of the alias. The characters /, $, `, and = and any of the shell metacharacters or quoting characters listed above may not appear in an alias name. The replacement text may contain any valid shell input, including shell metacharacters. The first word of the replacement text is tested for aliases, but a word that is identical to an alias being expanded is not expanded a second time. This means that one may alias ls to ls -F, for instance, and bash does not try to recursively expand the replacement text. If the last character of the alias value is a blank, then the next command word following the alias is also checked for alias expansion. Aliases are created and listed with the alias command, and removed with the unalias command. There is no mechanism for using arguments in the replacement text. If arguments are needed, use a shell function (see FUNCTIONS below). Aliases are not expanded when the shell is not interactive, unless the expand_aliases shell option is set using shopt (see the description of shopt under SHELL BUILTIN COMMANDS below). The rules concerning the definition and use of aliases are somewhat confusing. Bash always reads at least one complete line of input, and all lines that make up a compound command, before executing any of the commands on that line or the compound command. Aliases are expanded when a command is read, not when it is executed. Therefore, an alias definition appearing on the same line as another command does not take effect until the next line of input is read. 
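In a non-interactive shell, aliases require expand_aliases, and a definition is only visible to lines read after it, as described above; a rough sketch:

```shell
shopt -s expand_aliases    # scripts do not expand aliases by default
alias greet='echo hello'
# This line is parsed after the definition above was executed,
# so the alias is visible here:
out=$(greet)
echo "$out"
unalias greet
```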
The commands following the alias definition on that line are not affected by the new alias. This behavior is also an issue when functions are executed. Aliases are expanded when a function definition is read, not when the function is executed, because a function definition is itself a command. As a consequence, aliases defined in a function are not available until after that function is executed. To be safe, always put alias definitions on a separate line, and do not use alias in compound commands. For almost every purpose, aliases are superseded by shell functions. FUNCTIONS top A shell function, defined as described above under SHELL GRAMMAR, stores a series of commands for later execution. When the name of a shell function is used as a simple command name, the list of commands associated with that function name is executed. Functions are executed in the context of the current shell; no new process is created to interpret them (contrast this with the execution of a shell script). When a function is executed, the arguments to the function become the positional parameters during its execution. The special parameter # is updated to reflect the change. Special parameter 0 is unchanged. The first element of the FUNCNAME variable is set to the name of the function while the function is executing. All other aspects of the shell execution environment are identical between a function and its caller with these exceptions: the DEBUG and RETURN traps (see the description of the trap builtin under SHELL BUILTIN COMMANDS below) are not inherited unless the function has been given the trace attribute (see the description of the declare builtin below) or the -o functrace shell option has been enabled with the set builtin (in which case all functions inherit the DEBUG and RETURN traps), and the ERR trap is not inherited unless the -o errtrace shell option has been enabled. Variables local to the function may be declared with the local builtin command (local variables). 
Ordinarily, variables and their values are shared between the function and its caller. If a variable is declared local, the variable's visible scope is restricted to that function and its children (including the functions it calls). In the following description, the current scope is a currently- executing function. Previous scopes consist of that function's caller and so on, back to the "global" scope, where the shell is not executing any shell function. Consequently, a local variable at the current scope is a variable declared using the local or declare builtins in the function that is currently executing. Local variables "shadow" variables with the same name declared at previous scopes. For instance, a local variable declared in a function hides a global variable of the same name: references and assignments refer to the local variable, leaving the global variable unmodified. When the function returns, the global variable is once again visible. The shell uses dynamic scoping to control a variable's visibility within functions. With dynamic scoping, visible variables and their values are a result of the sequence of function calls that caused execution to reach the current function. The value of a variable that a function sees depends on its value within its caller, if any, whether that caller is the "global" scope or another shell function. This is also the value that a local variable declaration "shadows", and the value that is restored when the function returns. For example, if a variable var is declared as local in function func1, and func1 calls another function func2, references to var made from within func2 will resolve to the local variable var from func1, shadowing any global variable named var. The unset builtin also acts using the same dynamic scope: if a variable is local to the current scope, unset will unset it; otherwise the unset will refer to the variable found in any calling scope as described above. 
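The dynamic-scoping behavior described above can be seen directly; `func2` resolves `var` through its caller's scope, not the global one:

```shell
#!/usr/bin/env bash
var="global"

func2() {
    # Dynamic scoping: this sees func1's local var, not the global.
    echo "func2 sees: $var"
}

func1() {
    local var="local"   # shadows the global for func1 AND its callees
    func2
}

func1                   # prints: func2 sees: local
echo "after: $var"      # prints: after: global (never modified)
```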
If a variable at the current local scope is unset, it will remain so (appearing as unset) until it is reset in that scope or until the function returns. Once the function returns, any instance of the variable at a previous scope will become visible. If the unset acts on a variable at a previous scope, any instance of a variable with that name that had been shadowed will become visible (see below how the localvar_unset shell option changes this behavior). The FUNCNEST variable, if set to a numeric value greater than 0, defines a maximum function nesting level. Function invocations that exceed the limit cause the entire command to abort. If the builtin command return is executed in a function, the function completes and execution resumes with the next command after the function call. Any command associated with the RETURN trap is executed before execution resumes. When a function completes, the values of the positional parameters and the special parameter # are restored to the values they had prior to the function's execution. Function names and definitions may be listed with the -f option to the declare or typeset builtin commands. The -F option to declare or typeset will list the function names only (and optionally the source file and line number, if the extdebug shell option is enabled). Functions may be exported so that child shell processes (those created when executing a separate shell invocation) automatically have them defined with the -f option to the export builtin. A function definition may be deleted using the -f option to the unset builtin. Functions may be recursive. The FUNCNEST variable may be used to limit the depth of the function call stack and restrict the number of function invocations. By default, no limit is imposed on the number of recursive calls. 
ARITHMETIC EVALUATION top The shell allows arithmetic expressions to be evaluated, under certain circumstances (see the let and declare builtin commands, the (( compound command, and Arithmetic Expansion). Evaluation is done in fixed-width integers with no check for overflow, though division by 0 is trapped and flagged as an error. The operators and their precedence, associativity, and values are the same as in the C language. The following list of operators is grouped into levels of equal-precedence operators. The levels are listed in order of decreasing precedence.

id++ id--
       variable post-increment and post-decrement
- +    unary minus and plus
++id --id
       variable pre-increment and pre-decrement
! ~    logical and bitwise negation
**     exponentiation
* / %  multiplication, division, remainder
+ -    addition, subtraction
<< >>  left and right bitwise shifts
<= >= < >
       comparison
== !=  equality and inequality
&      bitwise AND
^      bitwise exclusive OR
|      bitwise OR
&&     logical AND
||     logical OR
expr?expr:expr
       conditional operator
= *= /= %= += -= <<= >>= &= ^= |=
       assignment
expr1 , expr2
       comma

Shell variables are allowed as operands; parameter expansion is performed before the expression is evaluated. Within an expression, shell variables may also be referenced by name without using the parameter expansion syntax. A shell variable that is null or unset evaluates to 0 when referenced by name without using the parameter expansion syntax. The value of a variable is evaluated as an arithmetic expression when it is referenced, or when a variable which has been given the integer attribute using declare -i is assigned a value. A null value evaluates to 0. A shell variable need not have its integer attribute turned on to be used in an expression. Integer constants follow the C language definition, without suffixes or character constants. Constants with a leading 0 are interpreted as octal numbers. A leading 0x or 0X denotes hexadecimal.
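A quick sketch of these evaluation rules inside $((...)):

```shell
#!/usr/bin/env bash
# Precedence follows C: * binds tighter than +.
echo $(( 2 + 3 * 4 ))      # 14

# Variables are referenced by name, no $ needed; unset/null -> 0.
n=7
echo $(( n * 2 ))          # 14
echo $(( missing + 1 ))    # 1

# declare -i gives the integer attribute: assignments to the
# variable are themselves evaluated arithmetically.
declare -i x
x='3 + 4'
echo "$x"                  # 7

# Constants: leading 0 is octal, 0x hex, and base#n picks a base.
echo $(( 010 ))            # 8
echo $(( 0xff ))           # 255
echo $(( 2#1010 ))         # 10
```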
Otherwise, numbers take the form [base#]n, where the optional base is a decimal number between 2 and 64 representing the arithmetic base, and n is a number in that base. If base# is omitted, then base 10 is used. When specifying n, if a non-digit is required, the digits greater than 9 are represented by the lowercase letters, the uppercase letters, @, and _, in that order. If base is less than or equal to 36, lowercase and uppercase letters may be used interchangeably to represent numbers between 10 and 35. Operators are evaluated in order of precedence. Sub-expressions in parentheses are evaluated first and may override the precedence rules above. CONDITIONAL EXPRESSIONS top Conditional expressions are used by the [[ compound command and the test and [ builtin commands to test file attributes and perform string and arithmetic comparisons. The test and [ commands determine their behavior based on the number of arguments; see the descriptions of those commands for any other command-specific actions. Expressions are formed from the following unary or binary primaries. Bash handles several filenames specially when they are used in expressions. If the operating system on which bash is running provides these special files, bash will use them; otherwise it will emulate them internally with this behavior: If any file argument to one of the primaries is of the form /dev/fd/n, then file descriptor n is checked. If the file argument to one of the primaries is one of /dev/stdin, /dev/stdout, or /dev/stderr, file descriptor 0, 1, or 2, respectively, is checked. Unless otherwise specified, primaries that operate on files follow symbolic links and operate on the target of the link, rather than the link itself. When used with [[, the < and > operators sort lexicographically using the current locale. The test command sorts using ASCII ordering.

-a file
       True if file exists.
-b file
       True if file exists and is a block special file.
-c file
       True if file exists and is a character special file.
-d file
       True if file exists and is a directory.
-e file
       True if file exists.
-f file
       True if file exists and is a regular file.
-g file
       True if file exists and is set-group-id.
-h file
       True if file exists and is a symbolic link.
-k file
       True if file exists and its ``sticky'' bit is set.
-p file
       True if file exists and is a named pipe (FIFO).
-r file
       True if file exists and is readable.
-s file
       True if file exists and has a size greater than zero.
-t fd  True if file descriptor fd is open and refers to a terminal.
-u file
       True if file exists and its set-user-id bit is set.
-w file
       True if file exists and is writable.
-x file
       True if file exists and is executable.
-G file
       True if file exists and is owned by the effective group id.
-L file
       True if file exists and is a symbolic link.
-N file
       True if file exists and has been modified since it was last read.
-O file
       True if file exists and is owned by the effective user id.
-S file
       True if file exists and is a socket.
file1 -ef file2
       True if file1 and file2 refer to the same device and inode numbers.
file1 -nt file2
       True if file1 is newer (according to modification date) than file2, or if file1 exists and file2 does not.
file1 -ot file2
       True if file1 is older than file2, or if file2 exists and file1 does not.
-o optname
       True if the shell option optname is enabled. See the list of options under the description of the -o option to the set builtin below.
-v varname
       True if the shell variable varname is set (has been assigned a value).
-R varname
       True if the shell variable varname is set and is a name reference.
-z string
       True if the length of string is zero.
string
-n string
       True if the length of string is non-zero.
string1 == string2
string1 = string2
       True if the strings are equal. = should be used with the test command for POSIX conformance. When used with the [[ command, this performs pattern matching as described above (Compound Commands).
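A short sketch of the file primaries and the ==/= distinction (uses a throwaway directory from `mktemp -d`):

```shell
#!/usr/bin/env bash
tmp=$(mktemp -d)
touch "$tmp/file"

[[ -e "$tmp/file" ]] && echo "exists"
[[ -d "$tmp" ]]      && echo "directory"
[[ -s "$tmp/file" ]] || echo "empty"      # size is zero

# With [[, == pattern-matches against the right-hand side:
name="report.txt"
[[ $name == *.txt ]] && echo "a .txt file"

# test/[ uses = and compares literally, no pattern matching:
[ "$name" = "report.txt" ] && echo "literal match"

rm -rf "$tmp"
```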
string1 != string2
       True if the strings are not equal.
string1 < string2
       True if string1 sorts before string2 lexicographically.
string1 > string2
       True if string1 sorts after string2 lexicographically.
arg1 OP arg2
       OP is one of -eq, -ne, -lt, -le, -gt, or -ge. These arithmetic binary operators return true if arg1 is equal to, not equal to, less than, less than or equal to, greater than, or greater than or equal to arg2, respectively. Arg1 and arg2 may be positive or negative integers. When used with the [[ command, arg1 and arg2 are evaluated as arithmetic expressions (see ARITHMETIC EVALUATION above).

SIMPLE COMMAND EXPANSION top When a simple command is executed, the shell performs the following expansions, assignments, and redirections, from left to right, in the following order.

1. The words that the parser has marked as variable assignments (those preceding the command name) and redirections are saved for later processing.
2. The words that are not variable assignments or redirections are expanded. If any words remain after expansion, the first word is taken to be the name of the command and the remaining words are the arguments.
3. Redirections are performed as described above under REDIRECTION.
4. The text after the = in each variable assignment undergoes tilde expansion, parameter expansion, command substitution, arithmetic expansion, and quote removal before being assigned to the variable.

If no command name results, the variable assignments affect the current shell environment. In the case of such a command (one that consists only of assignment statements and redirections), assignment statements are performed before redirections. Otherwise, the variables are added to the environment of the executed command and do not affect the current shell environment. If any of the assignments attempts to assign a value to a readonly variable, an error occurs, and the command exits with a non-zero status.
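The two fates of a prefix assignment, sketched: with a command name it goes into that command's environment only; without one it affects the current shell.

```shell
#!/usr/bin/env bash
# Assignment preceding a command name: visible only to that command.
GREETING=hello bash -c 'echo "child sees: $GREETING"'
echo "shell sees: ${GREETING:-unset}"    # unaffected

# No command name: the assignment affects the current shell.
GREETING=hi
echo "now: $GREETING"
```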
If no command name results, redirections are performed, but do not affect the current shell environment. A redirection error causes the command to exit with a non-zero status. If there is a command name left after expansion, execution proceeds as described below. Otherwise, the command exits. If one of the expansions contained a command substitution, the exit status of the command is the exit status of the last command substitution performed. If there were no command substitutions, the command exits with a status of zero. COMMAND EXECUTION top After a command has been split into words, if it results in a simple command and an optional list of arguments, the following actions are taken. If the command name contains no slashes, the shell attempts to locate it. If there exists a shell function by that name, that function is invoked as described above in FUNCTIONS. If the name does not match a function, the shell searches for it in the list of shell builtins. If a match is found, that builtin is invoked. If the name is neither a shell function nor a builtin, and contains no slashes, bash searches each element of the PATH for a directory containing an executable file by that name. Bash uses a hash table to remember the full pathnames of executable files (see hash under SHELL BUILTIN COMMANDS below). A full search of the directories in PATH is performed only if the command is not found in the hash table. If the search is unsuccessful, the shell searches for a defined shell function named command_not_found_handle. If that function exists, it is invoked in a separate execution environment with the original command and the original command's arguments as its arguments, and the function's exit status becomes the exit status of that subshell. If that function is not defined, the shell prints an error message and returns an exit status of 127. 
If the search is successful, or if the command name contains one or more slashes, the shell executes the named program in a separate execution environment. Argument 0 is set to the name given, and the remaining arguments to the command are set to the arguments given, if any. If this execution fails because the file is not in executable format, and the file is not a directory, it is assumed to be a shell script, a file containing shell commands, and the shell creates a new instance of itself to execute it. This subshell reinitializes itself, so that the effect is as if a new shell had been invoked to handle the script, with the exception that the locations of commands remembered by the parent (see hash below under SHELL BUILTIN COMMANDS) are retained by the child. If the program is a file beginning with #!, the remainder of the first line specifies an interpreter for the program. The shell executes the specified interpreter on operating systems that do not handle this executable format themselves. The arguments to the interpreter consist of a single optional argument following the interpreter name on the first line of the program, followed by the name of the program, followed by the command arguments, if any. 
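The function, builtin, PATH search order above can be inspected with the type builtin:

```shell
#!/usr/bin/env bash
# 'type -t' reports what bash would execute for a name.
cd() { builtin cd "$@"; }   # a function now shadows the builtin

type -t cd       # function
type -t echo     # builtin
type -t ls       # file (found via PATH, remembered in the hash table)

unset -f cd
type -t cd       # builtin again
```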
COMMAND EXECUTION ENVIRONMENT top The shell has an execution environment, which consists of the following:

open files inherited by the shell at invocation, as modified by redirections supplied to the exec builtin
the current working directory as set by cd, pushd, or popd, or inherited by the shell at invocation
the file creation mode mask as set by umask or inherited from the shell's parent
current traps set by trap
shell parameters that are set by variable assignment or with set or inherited from the shell's parent in the environment
shell functions defined during execution or inherited from the shell's parent in the environment
options enabled at invocation (either by default or with command-line arguments) or by set
options enabled by shopt
shell aliases defined with alias
various process IDs, including those of background jobs, the value of $$, and the value of PPID

When a simple command other than a builtin or shell function is to be executed, it is invoked in a separate execution environment that consists of the following. Unless otherwise noted, the values are inherited from the shell.

the shell's open files, plus any modifications and additions specified by redirections to the command
the current working directory
the file creation mode mask
shell variables and functions marked for export, along with variables exported for the command, passed in the environment
traps caught by the shell are reset to the values inherited from the shell's parent, and traps ignored by the shell are ignored

A command invoked in this separate environment cannot affect the shell's execution environment. A subshell is a copy of the shell process. Command substitution, commands grouped with parentheses, and asynchronous commands are invoked in a subshell environment that is a duplicate of the shell environment, except that traps caught by the shell are reset to the values that the shell inherited from its parent at invocation.
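Subshell isolation in miniature: changes made inside parentheses or a command substitution never reach the parent shell.

```shell
#!/usr/bin/env bash
x=1

# Parentheses run commands in a subshell, a copy of the shell
# whose changes cannot leak back to the parent.
( x=2; cd /; echo "subshell: x=$x pwd=$PWD" )
echo "parent:   x=$x"       # x is still 1; cwd unchanged

# Command substitution also runs in a subshell:
y=$(x=3; echo "$x")
echo "y=$y x=$x"            # y=3, x still 1
```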
Builtin commands that are invoked as part of a pipeline are also executed in a subshell environment. Changes made to the subshell environment cannot affect the shell's execution environment. Subshells spawned to execute command substitutions inherit the value of the -e option from the parent shell. When not in posix mode, bash clears the -e option in such subshells. If a command is followed by a & and job control is not active, the default standard input for the command is the empty file /dev/null. Otherwise, the invoked command inherits the file descriptors of the calling shell as modified by redirections. ENVIRONMENT top When a program is invoked it is given an array of strings called the environment. This is a list of name-value pairs, of the form name=value. The shell provides several ways to manipulate the environment. On invocation, the shell scans its own environment and creates a parameter for each name found, automatically marking it for export to child processes. Executed commands inherit the environment. The export and declare -x commands allow parameters and functions to be added to and deleted from the environment. If the value of a parameter in the environment is modified, the new value becomes part of the environment, replacing the old. The environment inherited by any executed command consists of the shell's initial environment, whose values may be modified in the shell, less any pairs removed by the unset command, plus any additions via the export and declare -x commands. The environment for any simple command or function may be augmented temporarily by prefixing it with parameter assignments, as described above in PARAMETERS. These assignment statements affect only the environment seen by that command. If the -k option is set (see the set builtin command below), then all parameter assignments are placed in the environment for a command, not just those that precede the command name. 
When bash invokes an external command, the variable _ is set to the full filename of the command and passed to that command in its environment. EXIT STATUS top The exit status of an executed command is the value returned by the waitpid system call or equivalent function. Exit statuses fall between 0 and 255, though, as explained below, the shell may use values above 125 specially. Exit statuses from shell builtins and compound commands are also limited to this range. Under certain circumstances, the shell will use special values to indicate specific failure modes. For the shell's purposes, a command which exits with a zero exit status has succeeded. An exit status of zero indicates success. A non-zero exit status indicates failure. When a command terminates on a fatal signal N, bash uses the value of 128+N as the exit status. If a command is not found, the child process created to execute it returns a status of 127. If a command is found but is not executable, the return status is 126. If a command fails because of an error during expansion or redirection, the exit status is greater than zero. Shell builtin commands return a status of 0 (true) if successful, and non-zero (false) if an error occurs while they execute. All builtins return an exit status of 2 to indicate incorrect usage, generally invalid options or missing arguments. The exit status of the last command is available in the special parameter $?. Bash itself returns the exit status of the last command executed, unless a syntax error occurs, in which case it exits with a non- zero value. See also the exit builtin command below. SIGNALS top When bash is interactive, in the absence of any traps, it ignores SIGTERM (so that kill 0 does not kill an interactive shell), and SIGINT is caught and handled (so that the wait builtin is interruptible). In all cases, bash ignores SIGQUIT. If job control is in effect, bash ignores SIGTTIN, SIGTTOU, and SIGTSTP. 
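The special exit-status values described above (0/non-zero, 127, 126, 128+N) can be checked from a script; the exact values assume a conventional Linux signal numbering (SIGTERM = 15):

```shell
#!/usr/bin/env bash
true;  echo $?       # 0 - success
false; echo $?       # 1 - failure

bash -c 'no_such_cmd_xyz' 2>/dev/null
echo $?              # 127 - command not found

tmp=$(mktemp)        # mktemp files are not executable
"$tmp" 2>/dev/null
echo $?              # 126 - found but not executable
rm -f "$tmp"

bash -c 'kill -TERM $$'
echo $?              # 143 - fatal signal N gives 128+N (15 here)
```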
Non-builtin commands run by bash have signal handlers set to the values inherited by the shell from its parent. When job control is not in effect, asynchronous commands ignore SIGINT and SIGQUIT in addition to these inherited handlers. Commands run as a result of command substitution ignore the keyboard-generated job control signals SIGTTIN, SIGTTOU, and SIGTSTP. The shell exits by default upon receipt of a SIGHUP. Before exiting, an interactive shell resends the SIGHUP to all jobs, running or stopped. Stopped jobs are sent SIGCONT to ensure that they receive the SIGHUP. To prevent the shell from sending the signal to a particular job, it should be removed from the jobs table with the disown builtin (see SHELL BUILTIN COMMANDS below) or marked to not receive SIGHUP using disown -h. If the huponexit shell option has been set with shopt, bash sends a SIGHUP to all jobs when an interactive login shell exits. If bash is waiting for a command to complete and receives a signal for which a trap has been set, the trap will not be executed until the command completes. When bash is waiting for an asynchronous command via the wait builtin, the reception of a signal for which a trap has been set will cause the wait builtin to return immediately with an exit status greater than 128, immediately after which the trap is executed. When job control is not enabled, and bash is waiting for a foreground command to complete, the shell receives keyboard-generated signals such as SIGINT (usually generated by ^C) that users commonly intend to send to that command. This happens because the shell and the command are in the same process group as the terminal, and ^C sends SIGINT to all processes in that process group. When bash is running without job control enabled and receives SIGINT while waiting for a foreground command, it waits until that foreground command terminates and then decides what to do about the SIGINT:

1. If the command terminates due to the SIGINT, bash concludes that the user meant to end the entire script, and acts on the SIGINT (e.g., by running a SIGINT trap or exiting itself);
2. If the command does not terminate due to SIGINT, the program handled the SIGINT itself and did not treat it as a fatal signal. In that case, bash does not treat SIGINT as a fatal signal, either, instead assuming that the SIGINT was used as part of the program's normal operation (e.g., emacs uses it to abort editing commands) or deliberately discarded. However, bash will run any trap set on SIGINT, as it does with any other trapped signal it receives while it is waiting for the foreground command to complete, for compatibility.

JOB CONTROL top Job control refers to the ability to selectively stop (suspend) the execution of processes and continue (resume) their execution at a later point. A user typically employs this facility via an interactive interface supplied jointly by the operating system kernel's terminal driver and bash. The shell associates a job with each pipeline. It keeps a table of currently executing jobs, which may be listed with the jobs command. When bash starts a job asynchronously (in the background), it prints a line that looks like: [1] 25647 indicating that this job is job number 1 and that the process ID of the last process in the pipeline associated with this job is 25647. All of the processes in a single pipeline are members of the same job. Bash uses the job abstraction as the basis for job control. To facilitate the implementation of the user interface to job control, the operating system maintains the notion of a current terminal process group ID. Members of this process group (processes whose process group ID is equal to the current terminal process group ID) receive keyboard-generated signals such as SIGINT. These processes are said to be in the foreground.
Background processes are those whose process group ID differs from the terminal's; such processes are immune to keyboard-generated signals. Only foreground processes are allowed to read from or, if the user so specifies with stty tostop, write to the terminal. Background processes which attempt to read from (write to when stty tostop is in effect) the terminal are sent a SIGTTIN (SIGTTOU) signal by the kernel's terminal driver, which, unless caught, suspends the process. If the operating system on which bash is running supports job control, bash contains facilities to use it. Typing the suspend character (typically ^Z, Control-Z) while a process is running causes that process to be stopped and returns control to bash. Typing the delayed suspend character (typically ^Y, Control-Y) causes the process to be stopped when it attempts to read input from the terminal, and control to be returned to bash. The user may then manipulate the state of this job, using the bg command to continue it in the background, the fg command to continue it in the foreground, or the kill command to kill it. A ^Z takes effect immediately, and has the additional side effect of causing pending output and typeahead to be discarded. There are a number of ways to refer to a job in the shell. The character % introduces a job specification (jobspec). Job number n may be referred to as %n. A job may also be referred to using a prefix of the name used to start it, or using a substring that appears in its command line. For example, %ce refers to a stopped job whose command name begins with ce. If a prefix matches more than one job, bash reports an error. Using %?ce, on the other hand, refers to any job containing the string ce in its command line. If the substring matches more than one job, bash reports an error. The symbols %% and %+ refer to the shell's notion of the current job, which is the last job stopped while it was in the foreground or started in the background. 
The previous job may be referenced using %-. If there is only a single job, %+ and %- can both be used to refer to that job. In output pertaining to jobs (e.g., the output of the jobs command), the current job is always flagged with a +, and the previous job with a -. A single % (with no accompanying job specification) also refers to the current job. Simply naming a job can be used to bring it into the foreground: %1 is a synonym for ``fg %1'', bringing job 1 from the background into the foreground. Similarly, ``%1 &'' resumes job 1 in the background, equivalent to ``bg %1''. The shell learns immediately whenever a job changes state. Normally, bash waits until it is about to print a prompt before reporting changes in a job's status so as to not interrupt any other output. If the -b option to the set builtin command is enabled, bash reports such changes immediately. Any trap on SIGCHLD is executed for each child that exits. If an attempt to exit bash is made while jobs are stopped (or, if the checkjobs shell option has been enabled using the shopt builtin, running), the shell prints a warning message, and, if the checkjobs option is enabled, lists the jobs and their statuses. The jobs command may then be used to inspect their status. If a second attempt to exit is made without an intervening command, the shell does not print another warning, and any stopped jobs are terminated. When the shell is waiting for a job or process using the wait builtin, and job control is enabled, wait will return when the job changes state. The -f option causes wait to wait until the job or process terminates before returning. PROMPTING top When executing interactively, bash displays the primary prompt PS1 when it is ready to read a command, and the secondary prompt PS2 when it needs more input to complete a command. Bash displays PS0 after it reads a command but before executing it. Bash displays PS4 as described above before tracing each command when the -x option is enabled. 
Bash allows these prompt strings to be customized by inserting a number of backslash-escaped special characters that are decoded as follows:

\a     an ASCII bell character (07)
\d     the date in "Weekday Month Date" format (e.g., "Tue May 26")
\D{format}
       the format is passed to strftime(3) and the result is inserted into the prompt string; an empty format results in a locale-specific time representation. The braces are required
\e     an ASCII escape character (033)
\h     the hostname up to the first `.'
\H     the hostname
\j     the number of jobs currently managed by the shell
\l     the basename of the shell's terminal device name
\n     newline
\r     carriage return
\s     the name of the shell, the basename of $0 (the portion following the final slash)
\t     the current time in 24-hour HH:MM:SS format
\T     the current time in 12-hour HH:MM:SS format
\@     the current time in 12-hour am/pm format
\A     the current time in 24-hour HH:MM format
\u     the username of the current user
\v     the version of bash (e.g., 2.00)
\V     the release of bash, version + patch level (e.g., 2.00.0)
\w     the value of the PWD shell variable ($PWD), with $HOME abbreviated with a tilde (uses the value of the PROMPT_DIRTRIM variable)
\W     the basename of $PWD, with $HOME abbreviated with a tilde
\!     the history number of this command
\#     the command number of this command
\$     if the effective UID is 0, a #, otherwise a $
\nnn   the character corresponding to the octal number nnn
\\     a backslash
\[     begin a sequence of non-printing characters, which could be used to embed a terminal control sequence into the prompt
\]     end a sequence of non-printing characters

The command number and the history number are usually different: the history number of a command is its position in the history list, which may include commands restored from the history file (see HISTORY below), while the command number is the position in the sequence of commands executed during the current shell session.
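One convenient way to experiment with these escapes without an interactive shell is the ${parameter@P} expansion (available in bash 4.4 and later), which decodes a string as if it were a prompt:

```shell
#!/usr/bin/env bash
p='\s-\v'                 # shell name and version
echo "${p@P}"             # e.g. bash-5.2

p='\u@\h \W \$'           # a classic user@host dir $ prompt
echo "${p@P}"

# \[ and \] bracket non-printing sequences (e.g. colors) so the
# line editor can compute the prompt's printable width correctly.
# Illustrative only; assigning PS1 in a script has no visible effect:
PS1='\[\e[32m\]\u@\h\[\e[0m\] \w\$ '
```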
After the string is decoded, it is expanded via parameter expansion, command substitution, arithmetic expansion, and quote removal, subject to the value of the promptvars shell option (see the description of the shopt command under SHELL BUILTIN COMMANDS below). This can have unwanted side effects if escaped portions of the string appear within command substitution or contain characters special to word expansion. READLINE This is the library that handles reading input when using an interactive shell, unless the --noediting option is given at shell invocation. Line editing is also used when using the -e option to the read builtin. By default, the line editing commands are similar to those of Emacs. A vi-style line editing interface is also available. Line editing can be enabled at any time using the -o emacs or -o vi options to the set builtin (see SHELL BUILTIN COMMANDS below). To turn off line editing after the shell is running, use the +o emacs or +o vi options to the set builtin. Readline Notation In this section, the Emacs-style notation is used to denote keystrokes. Control keys are denoted by C-key, e.g., C-n means Control-N. Similarly, meta keys are denoted by M-key, so M-x means Meta-X. (On keyboards without a meta key, M-x means ESC x, i.e., press the Escape key then the x key. This makes ESC the meta prefix. The combination M-C-x means ESC-Control-x, or press the Escape key then hold the Control key while pressing the x key.) Readline commands may be given numeric arguments, which normally act as a repeat count. Sometimes, however, it is the sign of the argument that is significant. Passing a negative argument to a command that acts in the forward direction (e.g., kill-line) causes that command to act in a backward direction. Commands whose behavior with arguments deviates from this are noted below. When a command is described as killing text, the text deleted is saved for possible future retrieval (yanking). The killed text is saved in a kill ring. 
Consecutive kills cause the text to be accumulated into one unit, which can be yanked all at once. Commands which do not kill text separate the chunks of text on the kill ring. Readline Initialization Readline is customized by putting commands in an initialization file (the inputrc file). The name of this file is taken from the value of the INPUTRC variable. If that variable is unset, the default is ~/.inputrc. If that file does not exist or cannot be read, the ultimate default is /etc/inputrc. When a program which uses the readline library starts up, the initialization file is read, and the key bindings and variables are set. There are only a few basic constructs allowed in the readline initialization file. Blank lines are ignored. Lines beginning with a # are comments. Lines beginning with a $ indicate conditional constructs. Other lines denote key bindings and variable settings. The default key-bindings may be changed with an inputrc file. Other programs that use this library may add their own commands and bindings. For example, placing M-Control-u: universal-argument or C-Meta-u: universal-argument into the inputrc would make M-C-u execute the readline command universal-argument. The following symbolic character names are recognized: RUBOUT, DEL, ESC, LFD, NEWLINE, RET, RETURN, SPC, SPACE, and TAB. In addition to command names, readline allows keys to be bound to a string that is inserted when the key is pressed (a macro). Readline Key Bindings The syntax for controlling key bindings in the inputrc file is simple. All that is required is the name of the command or the text of a macro and a key sequence to which it should be bound. The name may be specified in one of two ways: as a symbolic key name, possibly with Meta- or Control- prefixes, or as a key sequence. When using the form keyname:function-name or macro, keyname is the name of a key spelled out in English. 
For example: Control-u: universal-argument Meta-Rubout: backward-kill-word Control-o: "> output" In the above example, C-u is bound to the function universal-argument, M-DEL is bound to the function backward-kill-word, and C-o is bound to run the macro expressed on the right hand side (that is, to insert the text ``> output'' into the line). In the second form, "keyseq":function-name or macro, keyseq differs from keyname above in that strings denoting an entire key sequence may be specified by placing the sequence within double quotes. Some GNU Emacs style key escapes can be used, as in the following example, but the symbolic character names are not recognized. "\C-u": universal-argument "\C-x\C-r": re-read-init-file "\e[11~": "Function Key 1" In this example, C-u is again bound to the function universal-argument. C-x C-r is bound to the function re-read-init-file, and ESC [ 1 1 ~ is bound to insert the text ``Function Key 1''. The full set of GNU Emacs style escape sequences is \C- control prefix \M- meta prefix \e an escape character \\ backslash \" literal " \' literal ' In addition to the GNU Emacs style escape sequences, a second set of backslash escapes is available: \a alert (bell) \b backspace \d delete \f form feed \n newline \r carriage return \t horizontal tab \v vertical tab \nnn the eight-bit character whose value is the octal value nnn (one to three digits) \xHH the eight-bit character whose value is the hexadecimal value HH (one or two hex digits) When entering the text of a macro, single or double quotes must be used to indicate a macro definition. Unquoted text is assumed to be a function name. In the macro body, the backslash escapes described above are expanded. Backslash will quote any other character in the macro text, including " and '. Bash allows the current readline key bindings to be displayed or modified with the bind builtin command. 
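Putting these pieces together, a small ~/.inputrc might look like the following. This is an illustrative sketch; the particular bindings and settings chosen here are common conventions, not defaults prescribed by this manual:

```
# illustrative ~/.inputrc fragment
$include /etc/inputrc            # start from the system-wide defaults
set bell-style visible           # flash instead of beeping
set completion-ignore-case on
# Bind the arrow keys to prefix history search; the escape sequences
# are terminal-dependent (\e[A / \e[B are typical for Up / Down).
"\e[A": history-search-backward
"\e[B": history-search-forward
M-Control-u: universal-argument
```

Changes to the file take effect in new readline sessions, or immediately after running the re-read-init-file command (C-x C-r by default).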
The editing mode may be switched during interactive use by using the -o option to the set builtin command (see SHELL BUILTIN COMMANDS below). Readline Variables Readline has variables that can be used to further customize its behavior. A variable may be set in the inputrc file with a statement of the form set variable-name value or using the bind builtin command (see SHELL BUILTIN COMMANDS below). Except where noted, readline variables can take the values On or Off (without regard to case). Unrecognized variable names are ignored. When a variable value is read, empty or null values, "on" (case-insensitive), and "1" are equivalent to On. All other values are equivalent to Off. The variables and their default values are: active-region-start-color A string variable that controls the text color and background when displaying the text in the active region (see the description of enable-active-region below). This string must not take up any physical character positions on the display, so it should consist only of terminal escape sequences. It is output to the terminal before displaying the text in the active region. This variable is reset to the default value whenever the terminal type changes. The default value is the string that puts the terminal in standout mode, as obtained from the terminal's terminfo description. A sample value might be "\e[01;33m". active-region-end-color A string variable that "undoes" the effects of active-region-start-color and restores "normal" terminal display appearance after displaying text in the active region. This string must not take up any physical character positions on the display, so it should consist only of terminal escape sequences. It is output to the terminal after displaying the text in the active region. This variable is reset to the default value whenever the terminal type changes. The default value is the string that restores the terminal from standout mode, as obtained from the terminal's terminfo description. 
A sample value might be "\e[0m". bell-style (audible) Controls what happens when readline wants to ring the terminal bell. If set to none, readline never rings the bell. If set to visible, readline uses a visible bell if one is available. If set to audible, readline attempts to ring the terminal's bell. bind-tty-special-chars (On) If set to On, readline attempts to bind the control characters treated specially by the kernel's terminal driver to their readline equivalents. blink-matching-paren (Off) If set to On, readline attempts to briefly move the cursor to an opening parenthesis when a closing parenthesis is inserted. colored-completion-prefix (Off) If set to On, when listing completions, readline displays the common prefix of the set of possible completions using a different color. The color definitions are taken from the value of the LS_COLORS environment variable. If there is a color definition in $LS_COLORS for the custom suffix "readline-colored-completion-prefix", readline uses this color for the common prefix instead of its default. colored-stats (Off) If set to On, readline displays possible completions using different colors to indicate their file type. The color definitions are taken from the value of the LS_COLORS environment variable. comment-begin (``#'') The string that is inserted when the readline insert-comment command is executed. This command is bound to M-# in emacs mode and to # in vi command mode. completion-display-width (-1) The number of screen columns used to display possible matches when performing completion. The value is ignored if it is less than 0 or greater than the terminal screen width. A value of 0 will cause matches to be displayed one per line. The default value is -1. completion-ignore-case (Off) If set to On, readline performs filename matching and completion in a case-insensitive fashion. 
completion-map-case (Off) If set to On, and completion-ignore-case is enabled, readline treats hyphens (-) and underscores (_) as equivalent when performing case-insensitive filename matching and completion. completion-prefix-display-length (0) The length in characters of the common prefix of a list of possible completions that is displayed without modification. When set to a value greater than zero, common prefixes longer than this value are replaced with an ellipsis when displaying possible completions. completion-query-items (100) This determines when the user is queried about viewing the number of possible completions generated by the possible-completions command. It may be set to any integer value greater than or equal to zero. If the number of possible completions is greater than or equal to the value of this variable, readline will ask whether or not the user wishes to view them; otherwise they are simply listed on the terminal. A zero value means readline should never ask; negative values are treated as zero. convert-meta (On) If set to On, readline will convert characters with the eighth bit set to an ASCII key sequence by stripping the eighth bit and prefixing an escape character (in effect, using escape as the meta prefix). The default is On, but readline will set it to Off if the locale contains eight-bit characters. This variable is dependent on the LC_CTYPE locale category, and may change if the locale is changed. disable-completion (Off) If set to On, readline will inhibit word completion. Completion characters will be inserted into the line as if they had been mapped to self-insert. echo-control-characters (On) When set to On, on operating systems that indicate they support it, readline echoes a character corresponding to a signal generated from the keyboard. editing-mode (emacs) Controls whether readline begins with a set of key bindings similar to Emacs or vi. editing-mode can be set to either emacs or vi. 
emacs-mode-string (@) If the show-mode-in-prompt variable is enabled, this string is displayed immediately before the last line of the primary prompt when emacs editing mode is active. The value is expanded like a key binding, so the standard set of meta- and control prefixes and backslash escape sequences is available. Use the \1 and \2 escapes to begin and end sequences of non-printing characters, which can be used to embed a terminal control sequence into the mode string. enable-active-region (On) The point is the current cursor position, and mark refers to a saved cursor position. The text between the point and mark is referred to as the region. When this variable is set to On, readline allows certain commands to designate the region as active. When the region is active, readline highlights the text in the region using the value of the active-region-start-color, which defaults to the string that enables the terminal's standout mode. The active region shows the text inserted by bracketed-paste and any matching text found by incremental and non-incremental history searches. enable-bracketed-paste (On) When set to On, readline configures the terminal to insert each paste into the editing buffer as a single string of characters, instead of treating each character as if it had been read from the keyboard. This prevents readline from executing any editing commands bound to key sequences appearing in the pasted text. enable-keypad (Off) When set to On, readline will try to enable the application keypad when it is called. Some systems need this to enable the arrow keys. enable-meta-key (On) When set to On, readline will try to enable any meta modifier key the terminal claims to support when it is called. On many terminals, the meta key is used to send eight-bit characters. expand-tilde (Off) If set to On, tilde expansion is performed when readline attempts word completion. 
history-preserve-point (Off) If set to On, the history code attempts to place point at the same location on each history line retrieved with previous-history or next-history. history-size (unset) Set the maximum number of history entries saved in the history list. If set to zero, any existing history entries are deleted and no new entries are saved. If set to a value less than zero, the number of history entries is not limited. By default, the number of history entries is set to the value of the HISTSIZE shell variable. If an attempt is made to set history-size to a non-numeric value, the maximum number of history entries will be set to 500. horizontal-scroll-mode (Off) When set to On, makes readline use a single line for display, scrolling the input horizontally on a single screen line when it becomes longer than the screen width rather than wrapping to a new line. This setting is automatically enabled for terminals of height 1. input-meta (Off) If set to On, readline will enable eight-bit input (that is, it will not strip the eighth bit from the characters it reads), regardless of what the terminal claims it can support. The name meta-flag is a synonym for this variable. The default is Off, but readline will set it to On if the locale contains eight-bit characters. This variable is dependent on the LC_CTYPE locale category, and may change if the locale is changed. isearch-terminators (``C-[C-J'') The string of characters that should terminate an incremental search without subsequently executing the character as a command. If this variable has not been given a value, the characters ESC and C-J will terminate an incremental search. keymap (emacs) Set the current readline keymap. The set of valid keymap names is emacs, emacs-standard, emacs-meta, emacs-ctlx, vi, vi-command, and vi-insert. vi is equivalent to vi-command; emacs is equivalent to emacs-standard. The default value is emacs; the value of editing-mode also affects the default keymap. 
keyseq-timeout (500) Specifies the duration readline will wait for a character when reading an ambiguous key sequence (one that can form a complete key sequence using the input read so far, or can take additional input to complete a longer key sequence). If no input is received within the timeout, readline will use the shorter but complete key sequence. The value is specified in milliseconds, so a value of 1000 means that readline will wait one second for additional input. If this variable is set to a value less than or equal to zero, or to a non-numeric value, readline will wait until another key is pressed to decide which key sequence to complete. mark-directories (On) If set to On, completed directory names have a slash appended. mark-modified-lines (Off) If set to On, history lines that have been modified are displayed with a preceding asterisk (*). mark-symlinked-directories (Off) If set to On, completed names which are symbolic links to directories have a slash appended (subject to the value of mark-directories). match-hidden-files (On) This variable, when set to On, causes readline to match files whose names begin with a `.' (hidden files) when performing filename completion. If set to Off, the leading `.' must be supplied by the user in the filename to be completed. menu-complete-display-prefix (Off) If set to On, menu completion displays the common prefix of the list of possible completions (which may be empty) before cycling through the list. output-meta (Off) If set to On, readline will display characters with the eighth bit set directly rather than as a meta-prefixed escape sequence. The default is Off, but readline will set it to On if the locale contains eight-bit characters. This variable is dependent on the LC_CTYPE locale category, and may change if the locale is changed. page-completions (On) If set to On, readline uses an internal more-like pager to display a screenful of possible completions at a time. 
print-completions-horizontally (Off) If set to On, readline will display completions with matches sorted horizontally in alphabetical order, rather than down the screen. revert-all-at-newline (Off) If set to On, readline will undo all changes to history lines before returning when accept-line is executed. By default, history lines may be modified and retain individual undo lists across calls to readline. show-all-if-ambiguous (Off) This alters the default behavior of the completion functions. If set to On, words which have more than one possible completion cause the matches to be listed immediately instead of ringing the bell. show-all-if-unmodified (Off) This alters the default behavior of the completion functions in a fashion similar to show-all-if-ambiguous. If set to On, words which have more than one possible completion without any possible partial completion (the possible completions don't share a common prefix) cause the matches to be listed immediately instead of ringing the bell. show-mode-in-prompt (Off) If set to On, add a string to the beginning of the prompt indicating the editing mode: emacs, vi command, or vi insertion. The mode strings are user-settable (e.g., emacs-mode-string). skip-completed-text (Off) If set to On, this alters the default completion behavior when inserting a single match into the line. It's only active when performing completion in the middle of a word. If enabled, readline does not insert characters from the completion that match characters after point in the word being completed, so portions of the word following the cursor are not duplicated. vi-cmd-mode-string ((cmd)) If the show-mode-in-prompt variable is enabled, this string is displayed immediately before the last line of the primary prompt when vi editing mode is active and in command mode. The value is expanded like a key binding, so the standard set of meta- and control prefixes and backslash escape sequences is available. 
Use the \1 and \2 escapes to begin and end sequences of non-printing characters, which can be used to embed a terminal control sequence into the mode string. vi-ins-mode-string ((ins)) If the show-mode-in-prompt variable is enabled, this string is displayed immediately before the last line of the primary prompt when vi editing mode is active and in insertion mode. The value is expanded like a key binding, so the standard set of meta- and control prefixes and backslash escape sequences is available. Use the \1 and \2 escapes to begin and end sequences of non-printing characters, which can be used to embed a terminal control sequence into the mode string. visible-stats (Off) If set to On, a character denoting a file's type as reported by stat(2) is appended to the filename when listing possible completions. Readline Conditional Constructs Readline implements a facility similar in spirit to the conditional compilation features of the C preprocessor which allows key bindings and variable settings to be performed as the result of tests. There are four parser directives used. $if The $if construct allows bindings to be made based on the editing mode, the terminal being used, or the application using readline. The text of the test, after any comparison operator, extends to the end of the line; unless otherwise noted, no characters are required to isolate it. mode The mode= form of the $if directive is used to test whether readline is in emacs or vi mode. This may be used in conjunction with the set keymap command, for instance, to set bindings in the emacs-standard and emacs-ctlx keymaps only if readline is starting out in emacs mode. term The term= form may be used to include terminal-specific key bindings, perhaps to bind the key sequences output by the terminal's function keys. The word on the right side of the = is tested against both the full name of the terminal and the portion of the terminal name before the first -. 
This allows sun to match both sun and sun-cmd, for instance. version The version test may be used to perform comparisons against specific readline versions. The version expands to the current readline version. The set of comparison operators includes =, (and ==), !=, <=, >=, <, and >. The version number supplied on the right side of the operator consists of a major version number, an optional decimal point, and an optional minor version (e.g., 7.1). If the minor version is omitted, it is assumed to be 0. The operator may be separated from the string version and from the version number argument by whitespace. application The application construct is used to include application-specific settings. Each program using the readline library sets the application name, and an initialization file can test for a particular value. This could be used to bind key sequences to functions useful for a specific program. For instance, the following command adds a key sequence that quotes the current or previous word in bash: $if Bash # Quote the current or previous word "\C-xq": "\eb\"\ef\"" $endif variable The variable construct provides simple equality tests for readline variables and values. The permitted comparison operators are =, ==, and !=. The variable name must be separated from the comparison operator by whitespace; the operator may be separated from the value on the right hand side by whitespace. Both string and boolean variables may be tested. Boolean variables must be tested against the values on and off. $endif This command, as seen in the previous example, terminates an $if command. $else Commands in this branch of the $if directive are executed if the test fails. $include This directive takes a single filename as an argument and reads commands and bindings from that file. 
For example, the following directive would read /etc/inputrc: $include /etc/inputrc Searching Readline provides commands for searching through the command history (see HISTORY below) for lines containing a specified string. There are two search modes: incremental and non-incremental. Incremental searches begin before the user has finished typing the search string. As each character of the search string is typed, readline displays the next entry from the history matching the string typed so far. An incremental search requires only as many characters as needed to find the desired history entry. The characters present in the value of the isearch-terminators variable are used to terminate an incremental search. If that variable has not been assigned a value the Escape and Control-J characters will terminate an incremental search. Control-G will abort an incremental search and restore the original line. When the search is terminated, the history entry containing the search string becomes the current line. To find other matching entries in the history list, type Control-S or Control-R as appropriate. This will search backward or forward in the history for the next entry matching the search string typed so far. Any other key sequence bound to a readline command will terminate the search and execute that command. For instance, a newline will terminate the search and accept the line, thereby executing the command from the history list. Readline remembers the last incremental search string. If two Control-Rs are typed without any intervening characters defining a new search string, any remembered search string is used. Non-incremental searches read the entire search string before starting to search for matching history lines. The search string may be typed by the user or be part of the contents of the current line. Readline Command Names The following is a list of the names of the commands and the default key sequences to which they are bound. 
Command names without an accompanying key sequence are unbound by default. In the following descriptions, point refers to the current cursor position, and mark refers to a cursor position saved by the set-mark command. The text between the point and mark is referred to as the region. Commands for Moving beginning-of-line (C-a) Move to the start of the current line. end-of-line (C-e) Move to the end of the line. forward-char (C-f) Move forward a character. backward-char (C-b) Move back a character. forward-word (M-f) Move forward to the end of the next word. Words are composed of alphanumeric characters (letters and digits). backward-word (M-b) Move back to the start of the current or previous word. Words are composed of alphanumeric characters (letters and digits). shell-forward-word Move forward to the end of the next word. Words are delimited by non-quoted shell metacharacters. shell-backward-word Move back to the start of the current or previous word. Words are delimited by non-quoted shell metacharacters. previous-screen-line Attempt to move point to the same physical screen column on the previous physical screen line. This will not have the desired effect if the current readline line does not take up more than one physical line or if point is not greater than the length of the prompt plus the screen width. next-screen-line Attempt to move point to the same physical screen column on the next physical screen line. This will not have the desired effect if the current readline line does not take up more than one physical line or if the length of the current readline line is not greater than the length of the prompt plus the screen width. clear-display (M-C-l) Clear the screen and, if possible, the terminal's scrollback buffer, then redraw the current line, leaving the current line at the top of the screen. clear-screen (C-l) Clear the screen, then redraw the current line, leaving the current line at the top of the screen. 
With an argument, refresh the current line without clearing the screen. redraw-current-line Refresh the current line. Commands for Manipulating the History accept-line (Newline, Return) Accept the line regardless of where the cursor is. If this line is non-empty, add it to the history list according to the state of the HISTCONTROL variable. If the line is a modified history line, then restore the history line to its original state. previous-history (C-p) Fetch the previous command from the history list, moving back in the list. next-history (C-n) Fetch the next command from the history list, moving forward in the list. beginning-of-history (M-<) Move to the first line in the history. end-of-history (M->) Move to the end of the input history, i.e., the line currently being entered. operate-and-get-next (C-o) Accept the current line for execution and fetch the next line relative to the current line from the history for editing. A numeric argument, if supplied, specifies the history entry to use instead of the current line. fetch-history With a numeric argument, fetch that entry from the history list and make it the current line. Without an argument, move back to the first entry in the history list. reverse-search-history (C-r) Search backward starting at the current line and moving `up' through the history as necessary. This is an incremental search. forward-search-history (C-s) Search forward starting at the current line and moving `down' through the history as necessary. This is an incremental search. non-incremental-reverse-search-history (M-p) Search backward through the history starting at the current line using a non-incremental search for a string supplied by the user. non-incremental-forward-search-history (M-n) Search forward through the history using a non-incremental search for a string supplied by the user. history-search-forward Search forward through the history for the string of characters between the start of the current line and the point. 
This is a non-incremental search. history-search-backward Search backward through the history for the string of characters between the start of the current line and the point. This is a non-incremental search. history-substring-search-backward Search backward through the history for the string of characters between the start of the current line and the current cursor position (the point). The search string may match anywhere in a history line. This is a non-incremental search. history-substring-search-forward Search forward through the history for the string of characters between the start of the current line and the point. The search string may match anywhere in a history line. This is a non-incremental search. yank-nth-arg (M-C-y) Insert the first argument to the previous command (usually the second word on the previous line) at point. With an argument n, insert the nth word from the previous command (the words in the previous command begin with word 0). A negative argument inserts the nth word from the end of the previous command. Once the argument n is computed, the argument is extracted as if the "!n" history expansion had been specified. yank-last-arg (M-., M-_) Insert the last argument to the previous command (the last word of the previous history entry). With a numeric argument, behave exactly like yank-nth-arg. Successive calls to yank-last-arg move back through the history list, inserting the last word (or the word specified by the argument to the first call) of each line in turn. Any numeric argument supplied to these successive calls determines the direction to move through the history. A negative argument switches the direction through the history (back or forward). The history expansion facilities are used to extract the last word, as if the "!$" history expansion had been specified. shell-expand-line (M-C-e) Expand the line as the shell does. This performs alias and history expansion as well as all of the shell word expansions. 
See HISTORY EXPANSION below for a description of history expansion. history-expand-line (M-^) Perform history expansion on the current line. See HISTORY EXPANSION below for a description of history expansion. magic-space Perform history expansion on the current line and insert a space. See HISTORY EXPANSION below for a description of history expansion. alias-expand-line Perform alias expansion on the current line. See ALIASES above for a description of alias expansion. history-and-alias-expand-line Perform history and alias expansion on the current line. insert-last-argument (M-., M-_) A synonym for yank-last-arg. edit-and-execute-command (C-x C-e) Invoke an editor on the current command line, and execute the result as shell commands. Bash attempts to invoke $VISUAL, $EDITOR, and emacs as the editor, in that order. Commands for Changing Text end-of-file (usually C-d) The character indicating end-of-file as set, for example, by ``stty''. If this character is read when there are no characters on the line, and point is at the beginning of the line, readline interprets it as the end of input and returns EOF. delete-char (C-d) Delete the character at point. If this function is bound to the same character as the tty EOF character, as C-d commonly is, see above for the effects. backward-delete-char (Rubout) Delete the character behind the cursor. When given a numeric argument, save the deleted text on the kill ring. forward-backward-delete-char Delete the character under the cursor, unless the cursor is at the end of the line, in which case the character behind the cursor is deleted. quoted-insert (C-q, C-v) Add the next character typed to the line verbatim. This is how to insert characters like C-q, for example. tab-insert (C-v TAB) Insert a tab character. self-insert (a, b, A, 1, !, ...) Insert the character typed. transpose-chars (C-t) Drag the character before point forward over the character at point, moving point forward as well. 
If point is at the end of the line, then this transposes the two characters before point. Negative arguments have no effect. transpose-words (M-t) Drag the word before point past the word after point, moving point over that word as well. If point is at the end of the line, this transposes the last two words on the line. upcase-word (M-u) Uppercase the current (or following) word. With a negative argument, uppercase the previous word, but do not move point. downcase-word (M-l) Lowercase the current (or following) word. With a negative argument, lowercase the previous word, but do not move point. capitalize-word (M-c) Capitalize the current (or following) word. With a negative argument, capitalize the previous word, but do not move point. overwrite-mode Toggle overwrite mode. With an explicit positive numeric argument, switches to overwrite mode. With an explicit non-positive numeric argument, switches to insert mode. This command affects only emacs mode; vi mode does overwrite differently. Each call to readline() starts in insert mode. In overwrite mode, characters bound to self-insert replace the text at point rather than pushing the text to the right. Characters bound to backward-delete-char replace the character before point with a space. By default, this command is unbound. Killing and Yanking kill-line (C-k) Kill the text from point to the end of the line. backward-kill-line (C-x Rubout) Kill backward to the beginning of the line. unix-line-discard (C-u) Kill backward from point to the beginning of the line. The killed text is saved on the kill-ring. kill-whole-line Kill all characters on the current line, no matter where point is. kill-word (M-d) Kill from point to the end of the current word, or if between words, to the end of the next word. Word boundaries are the same as those used by forward-word. backward-kill-word (M-Rubout) Kill the word behind point. Word boundaries are the same as those used by backward-word. 
shell-kill-word Kill from point to the end of the current word, or if between words, to the end of the next word. Word boundaries are the same as those used by shell-forward-word. shell-backward-kill-word Kill the word behind point. Word boundaries are the same as those used by shell-backward-word. unix-word-rubout (C-w) Kill the word behind point, using white space as a word boundary. The killed text is saved on the kill-ring. unix-filename-rubout Kill the word behind point, using white space and the slash character as the word boundaries. The killed text is saved on the kill-ring. delete-horizontal-space (M-\) Delete all spaces and tabs around point. kill-region Kill the text in the current region. copy-region-as-kill Copy the text in the region to the kill buffer. copy-backward-word Copy the word before point to the kill buffer. The word boundaries are the same as backward-word. copy-forward-word Copy the word following point to the kill buffer. The word boundaries are the same as forward-word. yank (C-y) Yank the top of the kill ring into the buffer at point. yank-pop (M-y) Rotate the kill ring, and yank the new top. Only works following yank or yank-pop. Numeric Arguments digit-argument (M-0, M-1, ..., M--) Add this digit to the argument already accumulating, or start a new argument. M-- starts a negative argument. universal-argument This is another way to specify an argument. If this command is followed by one or more digits, optionally with a leading minus sign, those digits define the argument. If the command is followed by digits, executing universal-argument again ends the numeric argument, but is otherwise ignored. As a special case, if this command is immediately followed by a character that is neither a digit nor minus sign, the argument count for the next command is multiplied by four. The argument count is initially one, so executing this function the first time makes the argument count four, a second time makes the argument count sixteen, and so on. 
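The killing commands above can be attached to keys from a bash startup file with the bind builtin. A minimal illustrative ~/.bashrc fragment; the key choices here are examples, not defaults:

```shell
# Illustrative ~/.bashrc fragment -- key sequences are examples, not defaults.
bind '"\C-xk": kill-whole-line'            # C-x k kills the entire line
bind '"\ed": shell-kill-word'              # rebind M-d to the shell-word variant
bind '"\e\C-h": shell-backward-kill-word'  # M-C-h kills the shell word behind point
```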
Completing complete (TAB) Attempt to perform completion on the text before point. Bash attempts completion treating the text as a variable (if the text begins with $), username (if the text begins with ~), hostname (if the text begins with @), or command (including aliases and functions) in turn. If none of these produces a match, filename completion is attempted. possible-completions (M-?) List the possible completions of the text before point. insert-completions (M-*) Insert all completions of the text before point that would have been generated by possible-completions. menu-complete Similar to complete, but replaces the word to be completed with a single match from the list of possible completions. Repeated execution of menu-complete steps through the list of possible completions, inserting each match in turn. At the end of the list of completions, the bell is rung (subject to the setting of bell-style) and the original text is restored. An argument of n moves n positions forward in the list of matches; a negative argument may be used to move backward through the list. This command is intended to be bound to TAB, but is unbound by default. menu-complete-backward Identical to menu-complete, but moves backward through the list of possible completions, as if menu-complete had been given a negative argument. This command is unbound by default. delete-char-or-list Deletes the character under the cursor if not at the beginning or end of the line (like delete-char). If at the end of the line, behaves identically to possible-completions. This command is unbound by default. complete-filename (M-/) Attempt filename completion on the text before point. possible-filename-completions (C-x /) List the possible completions of the text before point, treating it as a filename. complete-username (M-~) Attempt completion on the text before point, treating it as a username. 
possible-username-completions (C-x ~) List the possible completions of the text before point, treating it as a username. complete-variable (M-$) Attempt completion on the text before point, treating it as a shell variable. possible-variable-completions (C-x $) List the possible completions of the text before point, treating it as a shell variable. complete-hostname (M-@) Attempt completion on the text before point, treating it as a hostname. possible-hostname-completions (C-x @) List the possible completions of the text before point, treating it as a hostname. complete-command (M-!) Attempt completion on the text before point, treating it as a command name. Command completion attempts to match the text against aliases, reserved words, shell functions, shell builtins, and finally executable filenames, in that order. possible-command-completions (C-x !) List the possible completions of the text before point, treating it as a command name. dynamic-complete-history (M-TAB) Attempt completion on the text before point, comparing the text against lines from the history list for possible completion matches. dabbrev-expand Attempt menu completion on the text before point, comparing the text against lines from the history list for possible completion matches. complete-into-braces (M-{) Perform filename completion and insert the list of possible completions enclosed within braces so the list is available to the shell (see Brace Expansion above). Keyboard Macros start-kbd-macro (C-x () Begin saving the characters typed into the current keyboard macro. end-kbd-macro (C-x )) Stop saving the characters typed into the current keyboard macro and store the definition. call-last-kbd-macro (C-x e) Re-execute the last keyboard macro defined, by making the characters in the macro appear as if typed at the keyboard. print-last-kbd-macro () Print the last keyboard macro defined in a format suitable for the inputrc file. 
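Several of the completion commands above, such as menu-complete, are unbound by default and can be enabled from a startup file. A sketch of a ~/.bashrc fragment, offered as an illustration rather than a recommendation:

```shell
# Illustrative ~/.bashrc fragment: cycle through completions with TAB.
bind 'TAB: menu-complete'                 # insert each match in turn
bind '"\e[Z": menu-complete-backward'     # Shift-TAB (on many terminals) steps back
bind 'set show-all-if-ambiguous on'       # list matches on the first TAB press
```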
Miscellaneous re-read-init-file (C-x C-r) Read in the contents of the inputrc file, and incorporate any bindings or variable assignments found there. abort (C-g) Abort the current editing command and ring the terminal's bell (subject to the setting of bell-style). do-lowercase-version (M-A, M-B, M-x, ...) If the metafied character x is uppercase, run the command that is bound to the corresponding metafied lowercase character. The behavior is undefined if x is already lowercase. prefix-meta (ESC) Metafy the next character typed. ESC f is equivalent to Meta-f. undo (C-_, C-x C-u) Incremental undo, separately remembered for each line. revert-line (M-r) Undo all changes made to this line. This is like executing the undo command enough times to return the line to its initial state. tilde-expand (M-&) Perform tilde expansion on the current word. set-mark (C-@, M-<space>) Set the mark to the point. If a numeric argument is supplied, the mark is set to that position. exchange-point-and-mark (C-x C-x) Swap the point with the mark. The current cursor position is set to the saved position, and the old cursor position is saved as the mark. character-search (C-]) A character is read and point is moved to the next occurrence of that character. A negative argument searches for previous occurrences. character-search-backward (M-C-]) A character is read and point is moved to the previous occurrence of that character. A negative argument searches for subsequent occurrences. skip-csi-sequence Read enough characters to consume a multi-key sequence such as those defined for keys like Home and End. Such sequences begin with a Control Sequence Indicator (CSI), usually ESC-[. If this sequence is bound to "\[", keys producing such sequences will have no effect unless explicitly bound to a readline command, instead of inserting stray characters into the editing buffer. This is unbound by default, but usually bound to ESC-[. 
insert-comment (M-#) Without a numeric argument, the value of the readline comment-begin variable is inserted at the beginning of the current line. If a numeric argument is supplied, this command acts as a toggle: if the characters at the beginning of the line do not match the value of comment-begin, the value is inserted, otherwise the characters in comment-begin are deleted from the beginning of the line. In either case, the line is accepted as if a newline had been typed. The default value of comment-begin causes this command to make the current line a shell comment. If a numeric argument causes the comment character to be removed, the line will be executed by the shell. spell-correct-word (C-x s) Perform spelling correction on the current word, treating it as a directory or filename, in the same way as the cdspell shell option. Word boundaries are the same as those used by shell-forward-word. glob-complete-word (M-g) The word before point is treated as a pattern for pathname expansion, with an asterisk implicitly appended. This pattern is used to generate a list of matching filenames for possible completions. glob-expand-word (C-x *) The word before point is treated as a pattern for pathname expansion, and the list of matching filenames is inserted, replacing the word. If a numeric argument is supplied, an asterisk is appended before pathname expansion. glob-list-expansions (C-x g) The list of expansions that would have been generated by glob-expand-word is displayed, and the line is redrawn. If a numeric argument is supplied, an asterisk is appended before pathname expansion. dump-functions Print all of the functions and their key bindings to the readline output stream. If a numeric argument is supplied, the output is formatted in such a way that it can be made part of an inputrc file. dump-variables Print all of the settable readline variables and their values to the readline output stream. 
If a numeric argument is supplied, the output is formatted in such a way that it can be made part of an inputrc file. dump-macros Print all of the readline key sequences bound to macros and the strings they output. If a numeric argument is supplied, the output is formatted in such a way that it can be made part of an inputrc file. display-shell-version (C-x C-v) Display version information about the current instance of bash. Programmable Completion When word completion is attempted for an argument to a command for which a completion specification (a compspec) has been defined using the complete builtin (see SHELL BUILTIN COMMANDS below), the programmable completion facilities are invoked. First, the command name is identified. If the command word is the empty string (completion attempted at the beginning of an empty line), any compspec defined with the -E option to complete is used. If a compspec has been defined for that command, the compspec is used to generate the list of possible completions for the word. If the command word is a full pathname, a compspec for the full pathname is searched for first. If no compspec is found for the full pathname, an attempt is made to find a compspec for the portion following the final slash. If those searches do not result in a compspec, any compspec defined with the -D option to complete is used as the default. If there is no default compspec, bash attempts alias expansion on the command word as a final resort, and attempts to find a compspec for the command word from any successful expansion. Once a compspec has been found, it is used to generate the list of matching words. If a compspec is not found, the default bash completion as described above under Completing is performed. First, the actions specified by the compspec are used. Only matches which are prefixed by the word being completed are returned. 
When the -f or -d option is used for filename or directory name completion, the shell variable FIGNORE is used to filter the matches. Any completions specified by a pathname expansion pattern to the -G option are generated next. The words generated by the pattern need not match the word being completed. The GLOBIGNORE shell variable is not used to filter the matches, but the FIGNORE variable is used. Next, the string specified as the argument to the -W option is considered. The string is first split using the characters in the IFS special variable as delimiters. Shell quoting is honored. Each word is then expanded using brace expansion, tilde expansion, parameter and variable expansion, command substitution, and arithmetic expansion, as described above under EXPANSION. The results are split using the rules described above under Word Splitting. The results of the expansion are prefix- matched against the word being completed, and the matching words become the possible completions. After these matches have been generated, any shell function or command specified with the -F and -C options is invoked. When the command or function is invoked, the COMP_LINE, COMP_POINT, COMP_KEY, and COMP_TYPE variables are assigned values as described above under Shell Variables. If a shell function is being invoked, the COMP_WORDS and COMP_CWORD variables are also set. When the function or command is invoked, the first argument ($1) is the name of the command whose arguments are being completed, the second argument ($2) is the word being completed, and the third argument ($3) is the word preceding the word being completed on the current command line. No filtering of the generated completions against the word being completed is performed; the function or command has complete freedom in generating the matches. Any function specified with -F is invoked first. The function may use any of the shell facilities, including the compgen builtin described below, to generate the matches. 
It must put the possible completions in the COMPREPLY array variable, one per array element. Next, any command specified with the -C option is invoked in an environment equivalent to command substitution. It should print a list of completions, one per line, to the standard output. Backslash may be used to escape a newline, if necessary. After all of the possible completions are generated, any filter specified with the -X option is applied to the list. The filter is a pattern as used for pathname expansion; a & in the pattern is replaced with the text of the word being completed. A literal & may be escaped with a backslash; the backslash is removed before attempting a match. Any completion that matches the pattern will be removed from the list. A leading ! negates the pattern; in this case any completion not matching the pattern will be removed. If the nocasematch shell option is enabled, the match is performed without regard to the case of alphabetic characters. Finally, any prefix and suffix specified with the -P and -S options are added to each member of the completion list, and the result is returned to the readline completion code as the list of possible completions. If the previously-applied actions do not generate any matches, and the -o dirnames option was supplied to complete when the compspec was defined, directory name completion is attempted. If the -o plusdirs option was supplied to complete when the compspec was defined, directory name completion is attempted and any matches are added to the results of the other actions. By default, if a compspec is found, whatever it generates is returned to the completion code as the full set of possible completions. The default bash completions are not attempted, and the readline default of filename completion is disabled. If the -o bashdefault option was supplied to complete when the compspec was defined, the bash default completions are attempted if the compspec generates no matches. 
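As a concrete sketch of the -F mechanism described above, the function below completes arguments for a hypothetical command named deploy (the command name and word list are invented for illustration). It receives the command name, the word being completed, and the preceding word as $1, $2, and $3, and must leave its matches in COMPREPLY:

```shell
# Completion handler for a hypothetical "deploy" command (illustrative only).
_deploy_completions() {
    local cur=$2    # $2 is the word being completed
    # compgen -W filters the word list against the current prefix and
    # prints the matches one per line; the array capture fills COMPREPLY.
    COMPREPLY=( $(compgen -W "staging production rollback status" -- "$cur") )
}
complete -F _deploy_completions deploy
```

After sourcing this, typing `deploy stag` and pressing TAB would complete to `staging`, while `deploy st` would offer both `staging` and `status`.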
If the -o default option was supplied to complete when the compspec was defined, readline's default completion will be performed if the compspec (and, if attempted, the default bash completions) generate no matches. When a compspec indicates that directory name completion is desired, the programmable completion functions force readline to append a slash to completed names which are symbolic links to directories, subject to the value of the mark-directories readline variable, regardless of the setting of the mark-symlinked-directories readline variable. There is some support for dynamically modifying completions. This is most useful when used in combination with a default completion specified with complete -D. It's possible for shell functions executed as completion handlers to indicate that completion should be retried by returning an exit status of 124. If a shell function returns 124, and changes the compspec associated with the command on which completion is being attempted (supplied as the first argument when the function is executed), programmable completion restarts from the beginning, with an attempt to find a new compspec for that command. This allows a set of completions to be built dynamically as completion is attempted, rather than being loaded all at once. For instance, assuming that there is a library of compspecs, each kept in a file corresponding to the name of the command, the following default completion function would load completions dynamically:

    _completion_loader()
    {
        . "/etc/bash_completion.d/$1.sh" >/dev/null 2>&1 && return 124
    }
    complete -D -F _completion_loader -o bashdefault -o default

HISTORY
When the -o history option to the set builtin is enabled, the shell provides access to the command history, the list of commands previously typed. The value of the HISTSIZE variable is used as the number of commands to save in a history list. The text of the last HISTSIZE commands (default 500) is saved.
The shell stores each command in the history list prior to parameter and variable expansion (see EXPANSION above) but after history expansion is performed, subject to the values of the shell variables HISTIGNORE and HISTCONTROL. On startup, the history is initialized from the file named by the variable HISTFILE (default ~/.bash_history). The file named by the value of HISTFILE is truncated, if necessary, to contain no more than the number of lines specified by the value of HISTFILESIZE. If HISTFILESIZE is unset, or set to null, a non- numeric value, or a numeric value less than zero, the history file is not truncated. When the history file is read, lines beginning with the history comment character followed immediately by a digit are interpreted as timestamps for the following history line. These timestamps are optionally displayed depending on the value of the HISTTIMEFORMAT variable. When a shell with history enabled exits, the last $HISTSIZE lines are copied from the history list to $HISTFILE. If the histappend shell option is enabled (see the description of shopt under SHELL BUILTIN COMMANDS below), the lines are appended to the history file, otherwise the history file is overwritten. If HISTFILE is unset, or if the history file is unwritable, the history is not saved. If the HISTTIMEFORMAT variable is set, time stamps are written to the history file, marked with the history comment character, so they may be preserved across shell sessions. This uses the history comment character to distinguish timestamps from other history lines. After saving the history, the history file is truncated to contain no more than HISTFILESIZE lines. If HISTFILESIZE is unset, or set to null, a non-numeric value, or a numeric value less than zero, the history file is not truncated. The builtin command fc (see SHELL BUILTIN COMMANDS below) may be used to list or edit and re-execute a portion of the history list. 
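The variables and options above (HISTSIZE, HISTFILESIZE, HISTCONTROL, HISTIGNORE, HISTTIMEFORMAT, histappend) are typically set in a startup file. A common ~/.bashrc sketch; the values are illustrative, not defaults:

```shell
# Illustrative ~/.bashrc history settings.
HISTSIZE=10000                       # commands kept in the in-memory list
HISTFILESIZE=20000                   # lines kept in $HISTFILE across sessions
HISTCONTROL=ignoredups:erasedups     # drop consecutive and older duplicates
HISTIGNORE='ls:bg:fg:history'        # never record these simple commands
HISTTIMEFORMAT='%F %T '              # show timestamps when listing history
shopt -s histappend                  # append to $HISTFILE instead of overwriting
```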
The history builtin may be used to display or modify the history list and manipulate the history file. When using command-line editing, search commands are available in each editing mode that provide access to the history list. The shell allows control over which commands are saved on the history list. The HISTCONTROL and HISTIGNORE variables may be set to cause the shell to save only a subset of the commands entered. The cmdhist shell option, if enabled, causes the shell to attempt to save each line of a multi-line command in the same history entry, adding semicolons where necessary to preserve syntactic correctness. The lithist shell option causes the shell to save the command with embedded newlines instead of semicolons. See the description of the shopt builtin below under SHELL BUILTIN COMMANDS for information on setting and unsetting shell options.

HISTORY EXPANSION
The shell supports a history expansion feature that is similar to the history expansion in csh. This section describes what syntax features are available. This feature is enabled by default for interactive shells, and can be disabled using the +H option to the set builtin command (see SHELL BUILTIN COMMANDS below). Non-interactive shells do not perform history expansion by default. History expansions introduce words from the history list into the input stream, making it easy to repeat commands, insert the arguments to a previous command into the current input line, or fix errors in previous commands quickly. History expansion is performed immediately after a complete line is read, before the shell breaks it into words, and is performed on each line individually without taking quoting on previous lines into account. It takes place in two parts. The first is to determine which line from the history list to use during substitution. The second is to select portions of that line for inclusion into the current one.
The line selected from the history is the event, and the portions of that line that are acted upon are words. Various modifiers are available to manipulate the selected words. The line is broken into words in the same fashion as when reading input, so that several metacharacter-separated words surrounded by quotes are considered one word. History expansions are introduced by the appearance of the history expansion character, which is ! by default. Only backslash (\) and single quotes can quote the history expansion character, but the history expansion character is also treated as quoted if it immediately precedes the closing double quote in a double-quoted string. Several characters inhibit history expansion if found immediately following the history expansion character, even if it is unquoted: space, tab, newline, carriage return, and =. If the extglob shell option is enabled, ( will also inhibit expansion. Several shell options settable with the shopt builtin may be used to tailor the behavior of history expansion. If the histverify shell option is enabled (see the description of the shopt builtin below), and readline is being used, history substitutions are not immediately passed to the shell parser. Instead, the expanded line is reloaded into the readline editing buffer for further modification. If readline is being used, and the histreedit shell option is enabled, a failed history substitution will be reloaded into the readline editing buffer for correction. The -p option to the history builtin command may be used to see what a history expansion will do before using it. The -s option to the history builtin may be used to add commands to the end of the history list without actually executing them, so that they are available for subsequent recall. The shell allows control of the various characters used by the history expansion mechanism (see the description of histchars above under Shell Variables). 
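The history -p preview mentioned above also makes the event designators and modifiers described below easy to experiment with non-interactively. A sketch, with an arbitrary seeded command line:

```shell
# Preview history expansions without executing them (illustrative).
set -o history                      # enable history recording in a script
history -s "ls /usr/local/bin"      # seed the history list with one entry

history -p '!ls'        # most recent command starting with "ls": ls /usr/local/bin
history -p '!ls:$'      # its last word: /usr/local/bin
history -p '!ls:$:t'    # :t keeps only the tail: bin
history -p '!ls:$:h'    # :h removes the tail: /usr/local
```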
The shell uses the history comment character to mark history timestamps when writing the history file. Event Designators An event designator is a reference to a command line entry in the history list. Unless the reference is absolute, events are relative to the current position in the history list. ! Start a history substitution, except when followed by a blank, newline, carriage return, = or ( (when the extglob shell option is enabled using the shopt builtin). !n Refer to command line n. !-n Refer to the current command minus n. !! Refer to the previous command. This is a synonym for `!-1'. !string Refer to the most recent command preceding the current position in the history list starting with string. !?string[?] Refer to the most recent command preceding the current position in the history list containing string. The trailing ? may be omitted if string is followed immediately by a newline. If string is missing, the string from the most recent search is used; it is an error if there is no previous search string. ^string1^string2^ Quick substitution. Repeat the previous command, replacing string1 with string2. Equivalent to ``!!:s^string1^string2^'' (see Modifiers below). !# The entire command line typed so far. Word Designators Word designators are used to select desired words from the event. A : separates the event specification from the word designator. It may be omitted if the word designator begins with a ^, $, *, -, or %. Words are numbered from the beginning of the line, with the first word being denoted by 0 (zero). Words are inserted into the current line separated by single spaces. 0 (zero) The zeroth word. For the shell, this is the command word. n The nth word. ^ The first argument. That is, word 1. $ The last word. This is usually the last argument, but will expand to the zeroth word if there is only one word in the line. % The first word matched by the most recent `?string?' search, if the search string begins with a character that is part of a word. 
x-y A range of words; `-y' abbreviates `0-y'. * All of the words but the zeroth. This is a synonym for `1-$'. It is not an error to use * if there is just one word in the event; the empty string is returned in that case. x* Abbreviates x-$. x- Abbreviates x-$ like x*, but omits the last word. If x is missing, it defaults to 0. If a word designator is supplied without an event specification, the previous command is used as the event. Modifiers After the optional word designator, there may appear a sequence of one or more of the following modifiers, each preceded by a `:'. These modify, or edit, the word or words selected from the history event. h Remove a trailing filename component, leaving only the head. t Remove all leading filename components, leaving the tail. r Remove a trailing suffix of the form .xxx, leaving the basename. e Remove all but the trailing suffix. p Print the new command but do not execute it. q Quote the substituted words, escaping further substitutions. x Quote the substituted words as with q, but break into words at blanks and newlines. The q and x modifiers are mutually exclusive; the last one supplied is used. s/old/new/ Substitute new for the first occurrence of old in the event line. Any character may be used as the delimiter in place of /. The final delimiter is optional if it is the last character of the event line. The delimiter may be quoted in old and new with a single backslash. If & appears in new, it is replaced by old. A single backslash will quote the &. If old is null, it is set to the last old substituted, or, if no previous history substitutions took place, the last string in a !?string[?] search. If new is null, each matching old is deleted. & Repeat the previous substitution. g Cause changes to be applied over the entire event line. This is used in conjunction with `:s' (e.g., `:gs/old/new/') or `:&'. 
If used with `:s', any delimiter can be used in place of /, and the final delimiter is optional if it is the last character of the event line. An a may be used as a synonym for g. G Apply the following `s' or `&' modifier once to each word in the event line.

SHELL BUILTIN COMMANDS
Unless otherwise noted, each builtin command documented in this section as accepting options preceded by - accepts -- to signify the end of the options. The :, true, false, and test/[ builtins do not accept options and do not treat -- specially. The exit, logout, return, break, continue, let, and shift builtins accept and process arguments beginning with - without requiring --. Other builtins that accept arguments but are not specified as accepting options interpret arguments beginning with - as invalid options and require -- to prevent this interpretation. : [arguments] No effect; the command does nothing beyond expanding arguments and performing any specified redirections. The return status is zero. . filename [arguments] source filename [arguments] Read and execute commands from filename in the current shell environment and return the exit status of the last command executed from filename. If filename does not contain a slash, filenames in PATH are used to find the directory containing filename. The file searched for in PATH need not be executable. When bash is not in posix mode, it searches the current directory if no file is found in PATH. If the sourcepath option to the shopt builtin command is turned off, the PATH is not searched. If any arguments are supplied, they become the positional parameters when filename is executed. Otherwise the positional parameters are unchanged. If the -T option is enabled, . inherits any trap on DEBUG; if it is not, any DEBUG trap string is saved and restored around the call to ., and . unsets the DEBUG trap while it executes.
If -T is not set, and the sourced file changes the DEBUG trap, the new value is retained when . completes. The return status is the status of the last command exited within the script (0 if no commands are executed), and false if filename is not found or cannot be read. alias [-p] [name[=value] ...] Alias with no arguments or with the -p option prints the list of aliases in the form alias name=value on standard output. When arguments are supplied, an alias is defined for each name whose value is given. A trailing space in value causes the next word to be checked for alias substitution when the alias is expanded. For each name in the argument list for which no value is supplied, the name and value of the alias is printed. Alias returns true unless a name is given for which no alias has been defined. bg [jobspec ...] Resume each suspended job jobspec in the background, as if it had been started with &. If jobspec is not present, the shell's notion of the current job is used. bg jobspec returns 0 unless run when job control is disabled or, when run with job control enabled, any specified jobspec was not found or was started without job control. bind [-m keymap] [-lpsvPSVX] bind [-m keymap] [-q function] [-u function] [-r keyseq] bind [-m keymap] -f filename bind [-m keymap] -x keyseq:shell-command bind [-m keymap] keyseq:function-name bind [-m keymap] keyseq:readline-command bind readline-command-line Display current readline key and function bindings, bind a key sequence to a readline function or macro, or set a readline variable. Each non-option argument is a command as it would appear in a readline initialization file such as .inputrc, but each binding or command must be passed as a separate argument; e.g., '"\C-x\C-r": re-read-init-file'. Options, if supplied, have the following meanings: -m keymap Use keymap as the keymap to be affected by the subsequent bindings. 
Acceptable keymap names are emacs, emacs-standard, emacs-meta, emacs-ctlx, vi, vi-move, vi-command, and vi-insert. vi is equivalent to vi-command (vi-move is also a synonym); emacs is equivalent to emacs-standard. -l List the names of all readline functions. -p Display readline function names and bindings in such a way that they can be re-read. -P List current readline function names and bindings. -s Display readline key sequences bound to macros and the strings they output in such a way that they can be re-read. -S Display readline key sequences bound to macros and the strings they output. -v Display readline variable names and values in such a way that they can be re-read. -V List current readline variable names and values. -f filename Read key bindings from filename. -q function Query about which keys invoke the named function. -u function Unbind all keys bound to the named function. -r keyseq Remove any current binding for keyseq. -x keyseq:shell-command Cause shell-command to be executed whenever keyseq is entered. When shell-command is executed, the shell sets the READLINE_LINE variable to the contents of the readline line buffer and the READLINE_POINT and READLINE_MARK variables to the current location of the insertion point and the saved insertion point (the mark), respectively. The shell assigns any numeric argument the user supplied to the READLINE_ARGUMENT variable. If there was no argument, that variable is not set. If the executed command changes the value of any of READLINE_LINE, READLINE_POINT, or READLINE_MARK, those new values will be reflected in the editing state. -X List all key sequences bound to shell commands and the associated commands in a format that can be reused as input. The return value is 0 unless an unrecognized option is given or an error occurred. break [n] Exit from within a for, while, until, or select loop. If n is specified, break n levels. n must be ≥ 1. 
If n is greater than the number of enclosing loops, all enclosing loops are exited. The return value is 0 unless n is not greater than or equal to 1. builtin shell-builtin [arguments] Execute the specified shell builtin, passing it arguments, and return its exit status. This is useful when defining a function whose name is the same as a shell builtin, retaining the functionality of the builtin within the function. The cd builtin is commonly redefined this way. The return status is false if shell-builtin is not a shell builtin command. caller [expr] Returns the context of any active subroutine call (a shell function or a script executed with the . or source builtins). Without expr, caller displays the line number and source filename of the current subroutine call. If a non-negative integer is supplied as expr, caller displays the line number, subroutine name, and source file corresponding to that position in the current execution call stack. This extra information may be used, for example, to print a stack trace. The current frame is frame 0. The return value is 0 unless the shell is not executing a subroutine call or expr does not correspond to a valid position in the call stack. cd [-L|[-P [-e]] [-@]] [dir] Change the current directory to dir. If dir is not supplied, the value of the HOME shell variable is the default. The variable CDPATH defines the search path for the directory containing dir: each directory name in CDPATH is searched for dir. Alternative directory names in CDPATH are separated by a colon (:). A null directory name in CDPATH is the same as the current directory, i.e., ``.''. If dir begins with a slash (/), then CDPATH is not used. The -P option causes cd to use the physical directory structure by resolving symbolic links while traversing dir and before processing instances of .. in dir (see also the -P option to the set builtin command); the -L option forces symbolic links to be followed by resolving the link after processing instances of .. 
in dir. If .. appears in dir, it is processed by removing the immediately previous pathname component from dir, back to a slash or the beginning of dir. If the -e option is supplied with -P, and the current working directory cannot be successfully determined after a successful directory change, cd will return an unsuccessful status. On systems that support it, the -@ option presents the extended attributes associated with a file as a directory. An argument of - is converted to $OLDPWD before the directory change is attempted. If a non-empty directory name from CDPATH is used, or if - is the first argument, and the directory change is successful, the absolute pathname of the new working directory is written to the standard output. If the directory change is successful, cd sets the value of the PWD environment variable to the new directory name, and sets the OLDPWD environment variable to the value of the current working directory before the change. The return value is true if the directory was successfully changed; false otherwise. command [-pVv] command [arg ...] Run command with args suppressing the normal shell function lookup. Only builtin commands or commands found in the PATH are executed. If the -p option is given, the search for command is performed using a default value for PATH that is guaranteed to find all of the standard utilities. If either the -V or -v option is supplied, a description of command is printed. The -v option causes a single word indicating the command or filename used to invoke command to be displayed; the -V option produces a more verbose description. If the -V or -v option is supplied, the exit status is 0 if command was found, and 1 if not. If neither option is supplied and an error occurred or command cannot be found, the exit status is 127. Otherwise, the exit status of the command builtin is the exit status of command. 
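The lookup-suppression behavior of the command builtin described above can be sketched as follows; the shadowed name ls is purely illustrative:

```shell
# A shell function that shadows the real ls command.
ls() { echo "shadowed"; }

ls                          # invokes the function: prints "shadowed"
command ls / > /dev/null    # bypasses the function; runs ls found in PATH
echo "real ls exit status: $?"
```

Because command skips function lookup, it is the usual way to call the "real" utility from inside a wrapper function of the same name.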
compgen [option] [word] Generate possible completion matches for word according to the options, which may be any option accepted by the complete builtin with the exception of -p and -r, and write the matches to the standard output. When using the -F or -C options, the various shell variables set by the programmable completion facilities, while available, will not have useful values. The matches will be generated in the same way as if the programmable completion code had generated them directly from a completion specification with the same flags. If word is specified, only those completions matching word will be displayed. The return value is true unless an invalid option is supplied, or no matches were generated. complete [-abcdefgjksuv] [-o comp-option] [-DEI] [-A action] [-G globpat] [-W wordlist] [-F function] [-C command] [-X filterpat] [-P prefix] [-S suffix] name [name ...] complete -pr [-DEI] [name ...] Specify how arguments to each name should be completed. If the -p option is supplied, or if no options are supplied, existing completion specifications are printed in a way that allows them to be reused as input. The -r option removes a completion specification for each name, or, if no names are supplied, all completion specifications. The -D option indicates that other supplied options and actions should apply to the ``default'' command completion; that is, completion attempted on a command for which no completion has previously been defined. The -E option indicates that other supplied options and actions should apply to ``empty'' command completion; that is, completion attempted on a blank line. The -I option indicates that other supplied options and actions should apply to completion on the initial non-assignment word on the line, or after a command delimiter such as ; or |, which is usually command name completion. If multiple options are supplied, the -D option takes precedence over -E, and both take precedence over -I. 
If any of -D, -E, or -I are supplied, any other name arguments are ignored; these completions only apply to the case specified by the option. The process of applying these completion specifications when word completion is attempted is described above under Programmable Completion. Other options, if specified, have the following meanings. The arguments to the -G, -W, and -X options (and, if necessary, the -P and -S options) should be quoted to protect them from expansion before the complete builtin is invoked. -o comp-option The comp-option controls several aspects of the compspec's behavior beyond the simple generation of completions. comp-option may be one of: bashdefault Perform the rest of the default bash completions if the compspec generates no matches. default Use readline's default filename completion if the compspec generates no matches. dirnames Perform directory name completion if the compspec generates no matches. filenames Tell readline that the compspec generates filenames, so it can perform any filename-specific processing (like adding a slash to directory names, quoting special characters, or suppressing trailing spaces). Intended to be used with shell functions. noquote Tell readline not to quote the completed words if they are filenames (quoting filenames is the default). nosort Tell readline not to sort the list of possible completions alphabetically. nospace Tell readline not to append a space (the default) to words completed at the end of the line. plusdirs After any matches defined by the compspec are generated, directory name completion is attempted and any matches are added to the results of the other actions. -A action The action may be one of the following to generate a list of possible completions: alias Alias names. May also be specified as -a. arrayvar Array variable names. binding Readline key binding names. builtin Names of shell builtin commands. May also be specified as -b. command Command names. May also be specified as -c. 
directory Directory names. May also be specified as -d. disabled Names of disabled shell builtins. enabled Names of enabled shell builtins. export Names of exported shell variables. May also be specified as -e. file File names. May also be specified as -f. function Names of shell functions. group Group names. May also be specified as -g. helptopic Help topics as accepted by the help builtin. hostname Hostnames, as taken from the file specified by the HOSTFILE shell variable. job Job names, if job control is active. May also be specified as -j. keyword Shell reserved words. May also be specified as -k. running Names of running jobs, if job control is active. service Service names. May also be specified as -s. setopt Valid arguments for the -o option to the set builtin. shopt Shell option names as accepted by the shopt builtin. signal Signal names. stopped Names of stopped jobs, if job control is active. user User names. May also be specified as -u. variable Names of all shell variables. May also be specified as -v. -C command command is executed in a subshell environment, and its output is used as the possible completions. Arguments are passed as with the -F option. -F function The shell function function is executed in the current shell environment. When the function is executed, the first argument ($1) is the name of the command whose arguments are being completed, the second argument ($2) is the word being completed, and the third argument ($3) is the word preceding the word being completed on the current command line. When it finishes, the possible completions are retrieved from the value of the COMPREPLY array variable. -G globpat The pathname expansion pattern globpat is expanded to generate the possible completions. -P prefix prefix is added at the beginning of each possible completion after all other options have been applied. -S suffix suffix is appended to each possible completion after all other options have been applied. 
-W wordlist The wordlist is split using the characters in the IFS special variable as delimiters, and each resultant word is expanded. Shell quoting is honored within wordlist, in order to provide a mechanism for the words to contain shell metacharacters or characters in the value of IFS. The possible completions are the members of the resultant list which match the word being completed. -X filterpat filterpat is a pattern as used for pathname expansion. It is applied to the list of possible completions generated by the preceding options and arguments, and each completion matching filterpat is removed from the list. A leading ! in filterpat negates the pattern; in this case, any completion not matching filterpat is removed. The return value is true unless an invalid option is supplied, an option other than -p or -r is supplied without a name argument, an attempt is made to remove a completion specification for a name for which no specification exists, or an error occurs adding a completion specification. compopt [-o option] [-DEI] [+o option] [name] Modify completion options for each name according to the options, or for the currently-executing completion if no names are supplied. If no options are given, display the completion options for each name or the current completion. The possible values of option are those valid for the complete builtin described above. The -D option indicates that other supplied options should apply to the ``default'' command completion; that is, completion attempted on a command for which no completion has previously been defined. The -E option indicates that other supplied options should apply to ``empty'' command completion; that is, completion attempted on a blank line. The -I option indicates that other supplied options should apply to completion on the initial non-assignment word on the line, or after a command delimiter such as ; or |, which is usually command name completion. 
The return value is true unless an invalid option is supplied, an attempt is made to modify the options for a name for which no completion specification exists, or an output error occurs. continue [n] Resume the next iteration of the enclosing for, while, until, or select loop. If n is specified, resume at the nth enclosing loop. n must be ≥ 1. If n is greater than the number of enclosing loops, the last enclosing loop (the ``top-level'' loop) is resumed. The return value is 0 unless n is not greater than or equal to 1. declare [-aAfFgiIlnrtux] [-p] [name[=value] ...] typeset [-aAfFgiIlnrtux] [-p] [name[=value] ...] Declare variables and/or give them attributes. If no names are given then display the values of variables. The -p option will display the attributes and values of each name. When -p is used with name arguments, additional options, other than -f and -F, are ignored. When -p is supplied without name arguments, it will display the attributes and values of all variables having the attributes specified by the additional options. If no other options are supplied with -p, declare will display the attributes and values of all shell variables. The -f option will restrict the display to shell functions. The -F option inhibits the display of function definitions; only the function name and attributes are printed. If the extdebug shell option is enabled using shopt, the source file name and line number where each name is defined are displayed as well. The -F option implies -f. The -g option forces variables to be created or modified at the global scope, even when declare is executed in a shell function. It is ignored in all other cases. The -I option causes local variables to inherit the attributes (except the nameref attribute) and value of any existing variable with the same name at a surrounding scope. If there is no existing variable, the local variable is initially unset. 
The following options can be used to restrict output to variables with the specified attribute or to give variables attributes: -a Each name is an indexed array variable (see Arrays above). -A Each name is an associative array variable (see Arrays above). -f Use function names only. -i The variable is treated as an integer; arithmetic evaluation (see ARITHMETIC EVALUATION above) is performed when the variable is assigned a value. -l When the variable is assigned a value, all upper- case characters are converted to lower-case. The upper-case attribute is disabled. -n Give each name the nameref attribute, making it a name reference to another variable. That other variable is defined by the value of name. All references, assignments, and attribute modifications to name, except those using or changing the -n attribute itself, are performed on the variable referenced by name's value. The nameref attribute cannot be applied to array variables. -r Make names readonly. These names cannot then be assigned values by subsequent assignment statements or unset. -t Give each name the trace attribute. Traced functions inherit the DEBUG and RETURN traps from the calling shell. The trace attribute has no special meaning for variables. -u When the variable is assigned a value, all lower- case characters are converted to upper-case. The lower-case attribute is disabled. -x Mark names for export to subsequent commands via the environment. Using `+' instead of `-' turns off the attribute instead, with the exceptions that +a and +A may not be used to destroy array variables and +r will not remove the readonly attribute. When used in a function, declare and typeset make each name local, as with the local command, unless the -g option is supplied. If a variable name is followed by =value, the value of the variable is set to value. When using -a or -A and the compound assignment syntax to create array variables, additional attributes do not take effect until subsequent assignments. 
The return value is 0 unless an invalid option is encountered, an attempt is made to define a function using ``-f foo=bar'', an attempt is made to assign a value to a readonly variable, an attempt is made to assign a value to an array variable without using the compound assignment syntax (see Arrays above), one of the names is not a valid shell variable name, an attempt is made to turn off readonly status for a readonly variable, an attempt is made to turn off array status for an array variable, or an attempt is made to display a non-existent function with -f. dirs [-clpv] [+n] [-n] Without options, displays the list of currently remembered directories. The default display is on a single line with directory names separated by spaces. Directories are added to the list with the pushd command; the popd command removes entries from the list. The current directory is always the first directory in the stack. -c Clears the directory stack by deleting all of the entries. -l Produces a listing using full pathnames; the default listing format uses a tilde to denote the home directory. -p Print the directory stack with one entry per line. -v Print the directory stack with one entry per line, prefixing each entry with its index in the stack. +n Displays the nth entry counting from the left of the list shown by dirs when invoked without options, starting with zero. -n Displays the nth entry counting from the right of the list shown by dirs when invoked without options, starting with zero. The return value is 0 unless an invalid option is supplied or n indexes beyond the end of the directory stack. disown [-ar] [-h] [jobspec ... | pid ... ] Without options, remove each jobspec from the table of active jobs. If jobspec is not present, and neither the -a nor the -r option is supplied, the current job is used. If the -h option is given, each jobspec is not removed from the table, but is marked so that SIGHUP is not sent to the job if the shell receives a SIGHUP. 
If no jobspec is supplied, the -a option means to remove or mark all jobs; the -r option without a jobspec argument restricts operation to running jobs. The return value is 0 unless a jobspec does not specify a valid job. echo [-neE] [arg ...] Output the args, separated by spaces, followed by a newline. The return status is 0 unless a write error occurs. If -n is specified, the trailing newline is suppressed. If the -e option is given, interpretation of the following backslash-escaped characters is enabled. The -E option disables the interpretation of these escape characters, even on systems where they are interpreted by default. The xpg_echo shell option may be used to dynamically determine whether or not echo expands these escape characters by default. echo does not interpret -- to mean the end of options. echo interprets the following escape sequences: \a alert (bell) \b backspace \c suppress further output \e \E an escape character \f form feed \n new line \r carriage return \t horizontal tab \v vertical tab \\ backslash \0nnn the eight-bit character whose value is the octal value nnn (zero to three octal digits) \xHH the eight-bit character whose value is the hexadecimal value HH (one or two hex digits) \uHHHH the Unicode (ISO/IEC 10646) character whose value is the hexadecimal value HHHH (one to four hex digits) \UHHHHHHHH the Unicode (ISO/IEC 10646) character whose value is the hexadecimal value HHHHHHHH (one to eight hex digits) enable [-a] [-dnps] [-f filename] [name ...] Enable and disable builtin shell commands. Disabling a builtin allows a disk command which has the same name as a shell builtin to be executed without specifying a full pathname, even though the shell normally searches for builtins before disk commands. If -n is used, each name is disabled; otherwise, names are enabled. For example, to use the test binary found via the PATH instead of the shell builtin version, run ``enable -n test''. 
The -f option means to load the new builtin command name from shared object filename, on systems that support dynamic loading. Bash will use the value of the BASH_LOADABLES_PATH variable as a colon-separated list of directories in which to search for filename. The default is system-dependent. The -d option will delete a builtin previously loaded with -f. If no name arguments are given, or if the -p option is supplied, a list of shell builtins is printed. With no other option arguments, the list consists of all enabled shell builtins. If -n is supplied, only disabled builtins are printed. If -a is supplied, the list printed includes all builtins, with an indication of whether or not each is enabled. If -s is supplied, the output is restricted to the POSIX special builtins. If no options are supplied and a name is not a shell builtin, enable will attempt to load name from a shared object named name, as if the command were ``enable -f name name''. The return value is 0 unless a name is not a shell builtin or there is an error loading a new builtin from a shared object. eval [arg ...] The args are read and concatenated together into a single command. This command is then read and executed by the shell, and its exit status is returned as the value of eval. If there are no args, or only null arguments, eval returns 0. exec [-cl] [-a name] [command [arguments]] If command is specified, it replaces the shell. No new process is created. The arguments become the arguments to command. If the -l option is supplied, the shell places a dash at the beginning of the zeroth argument passed to command. This is what login(1) does. The -c option causes command to be executed with an empty environment. If -a is supplied, the shell passes name as the zeroth argument to the executed command. If command cannot be executed for some reason, a non-interactive shell exits, unless the execfail shell option is enabled. In that case, it returns failure. 
An interactive shell returns failure if the file cannot be executed. A subshell exits unconditionally if exec fails. If command is not specified, any redirections take effect in the current shell, and the return status is 0. If there is a redirection error, the return status is 1. exit [n] Cause the shell to exit with a status of n. If n is omitted, the exit status is that of the last command executed. A trap on EXIT is executed before the shell terminates. export [-fn] [name[=word]] ... export -p The supplied names are marked for automatic export to the environment of subsequently executed commands. If the -f option is given, the names refer to functions. If no names are given, or if the -p option is supplied, a list of names of all exported variables is printed. The -n option causes the export property to be removed from each name. If a variable name is followed by =word, the value of the variable is set to word. export returns an exit status of 0 unless an invalid option is encountered, one of the names is not a valid shell variable name, or -f is supplied with a name that is not a function. fc [-e ename] [-lnr] [first] [last] fc -s [pat=rep] [cmd] The first form selects a range of commands from first to last from the history list and displays or edits and re- executes them. First and last may be specified as a string (to locate the last command beginning with that string) or as a number (an index into the history list, where a negative number is used as an offset from the current command number). When listing, a first or last of 0 is equivalent to -1 and -0 is equivalent to the current command (usually the fc command); otherwise 0 is equivalent to -1 and -0 is invalid. If last is not specified, it is set to the current command for listing (so that ``fc -l -10'' prints the last 10 commands) and to first otherwise. If first is not specified, it is set to the previous command for editing and -16 for listing. 
The -n option suppresses the command numbers when listing. The -r option reverses the order of the commands. If the -l option is given, the commands are listed on standard output. Otherwise, the editor given by ename is invoked on a file containing those commands. If ename is not given, the value of the FCEDIT variable is used, and the value of EDITOR if FCEDIT is not set. If neither variable is set, vi is used. When editing is complete, the edited commands are echoed and executed. In the second form, command is re-executed after each instance of pat is replaced by rep. Command is interpreted the same as first above. A useful alias to use with this is ``r="fc -s"'', so that typing ``r cc'' runs the last command beginning with ``cc'' and typing ``r'' re-executes the last command. If the first form is used, the return value is 0 unless an invalid option is encountered or first or last specify history lines out of range. If the -e option is supplied, the return value is the value of the last command executed or failure if an error occurs with the temporary file of commands. If the second form is used, the return status is that of the command re-executed, unless cmd does not specify a valid history line, in which case fc returns failure. fg [jobspec] Resume jobspec in the foreground, and make it the current job. If jobspec is not present, the shell's notion of the current job is used. The return value is that of the command placed into the foreground, or failure if run when job control is disabled or, when run with job control enabled, if jobspec does not specify a valid job or jobspec specifies a job that was started without job control. getopts optstring name [arg ...] getopts is used by shell procedures to parse positional parameters. optstring contains the option characters to be recognized; if a character is followed by a colon, the option is expected to have an argument, which should be separated from it by white space. 
The colon and question mark characters may not be used as option characters. Each time it is invoked, getopts places the next option in the shell variable name, initializing name if it does not exist, and the index of the next argument to be processed into the variable OPTIND. OPTIND is initialized to 1 each time the shell or a shell script is invoked. When an option requires an argument, getopts places that argument into the variable OPTARG. The shell does not reset OPTIND automatically; it must be manually reset between multiple calls to getopts within the same shell invocation if a new set of parameters is to be used. When the end of options is encountered, getopts exits with a return value greater than zero. OPTIND is set to the index of the first non-option argument, and name is set to ?. getopts normally parses the positional parameters, but if more arguments are supplied as arg values, getopts parses those instead. getopts can report errors in two ways. If the first character of optstring is a colon, silent error reporting is used. In normal operation, diagnostic messages are printed when invalid options or missing option arguments are encountered. If the variable OPTERR is set to 0, no error messages will be displayed, even if the first character of optstring is not a colon. If an invalid option is seen, getopts places ? into name and, if not silent, prints an error message and unsets OPTARG. If getopts is silent, the option character found is placed in OPTARG and no diagnostic message is printed. If a required argument is not found, and getopts is not silent, a question mark (?) is placed in name, OPTARG is unset, and a diagnostic message is printed. If getopts is silent, then a colon (:) is placed in name and OPTARG is set to the option character found. getopts returns true if an option, specified or unspecified, is found. It returns false if the end of options is encountered or an error occurs. 
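The getopts parsing loop described above is conventionally written as below. This is a minimal sketch using silent error reporting (the leading colon in optstring); the option letters a and o and the function name parse are illustrative:

```shell
parse() {
  local OPTIND=1 opt all='' out=''
  while getopts ":ao:" opt; do
    case $opt in
      a)  all=1 ;;                    # simple flag option
      o)  out=$OPTARG ;;              # option with a required argument
      :)  echo "missing argument for -$OPTARG" >&2; return 1 ;;
      \?) echo "invalid option -$OPTARG" >&2; return 1 ;;
    esac
  done
  shift $((OPTIND - 1))               # discard the parsed options
  echo "all=$all out=$out rest=$*"
}

parse -a -o file.txt leftover   # prints: all=1 out=file.txt rest=leftover
```

Making OPTIND local (or resetting it to 1) lets the function be called repeatedly with fresh argument lists, since the shell does not reset OPTIND automatically between calls.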
hash [-lr] [-p filename] [-dt] [name] Each time hash is invoked, the full pathname of the command name is determined by searching the directories in $PATH and remembered. Any previously-remembered pathname is discarded. If the -p option is supplied, no path search is performed, and filename is used as the full filename of the command. The -r option causes the shell to forget all remembered locations. The -d option causes the shell to forget the remembered location of each name. If the -t option is supplied, the full pathname to which each name corresponds is printed. If multiple name arguments are supplied with -t, the name is printed before the hashed full pathname. The -l option causes output to be displayed in a format that may be reused as input. If no arguments are given, or if only -l is supplied, information about remembered commands is printed. The return status is true unless a name is not found or an invalid option is supplied. help [-dms] [pattern] Display helpful information about builtin commands. If pattern is specified, help gives detailed help on all commands matching pattern; otherwise help for all the builtins and shell control structures is printed. -d Display a short description of each pattern -m Display the description of each pattern in a manpage-like format -s Display only a short usage synopsis for each pattern The return status is 0 unless no command matches pattern. history [n] history -c history -d offset history -d start-end history -anrw [filename] history -p arg [arg ...] history -s arg [arg ...] With no options, display the command history list with line numbers. Lines listed with a * have been modified. An argument of n lists only the last n lines. If the shell variable HISTTIMEFORMAT is set and not null, it is used as a format string for strftime(3) to display the time stamp associated with each displayed history entry. No intervening blank is printed between the formatted time stamp and the history line. 
If filename is supplied, it is used as the name of the history file; if not, the value of HISTFILE is used. Options, if supplied, have the following meanings: -c Clear the history list by deleting all the entries. -d offset Delete the history entry at position offset. If offset is negative, it is interpreted as relative to one greater than the last history position, so negative indices count back from the end of the history, and an index of -1 refers to the current history -d command. -d start-end Delete the range of history entries between positions start and end, inclusive. Positive and negative values for start and end are interpreted as described above. -a Append the ``new'' history lines to the history file. These are history lines entered since the beginning of the current bash session, but not already appended to the history file. -n Read the history lines not already read from the history file into the current history list. These are lines appended to the history file since the beginning of the current bash session. -r Read the contents of the history file and append them to the current history list. -w Write the current history list to the history file, overwriting the history file's contents. -p Perform history substitution on the following args and display the result on the standard output. Does not store the results in the history list. Each arg must be quoted to disable normal history expansion. -s Store the args in the history list as a single entry. The last command in the history list is removed before the args are added. If the HISTTIMEFORMAT variable is set, the time stamp information associated with each history entry is written to the history file, marked with the history comment character. When the history file is read, lines beginning with the history comment character followed immediately by a digit are interpreted as timestamps for the following history entry. 
The return value is 0 unless an invalid option is encountered, an error occurs while reading or writing the history file, an invalid offset or range is supplied as an argument to -d, or the history expansion supplied as an argument to -p fails. jobs [-lnprs] [ jobspec ... ] jobs -x command [ args ... ] The first form lists the active jobs. The options have the following meanings: -l List process IDs in addition to the normal information. -n Display information only about jobs that have changed status since the user was last notified of their status. -p List only the process ID of the job's process group leader. -r Display only running jobs. -s Display only stopped jobs. If jobspec is given, output is restricted to information about that job. The return status is 0 unless an invalid option is encountered or an invalid jobspec is supplied. If the -x option is supplied, jobs replaces any jobspec found in command or args with the corresponding process group ID, and executes command passing it args, returning its exit status. kill [-s sigspec | -n signum | -sigspec] [pid | jobspec] ... kill -l|-L [sigspec | exit_status] Send the signal named by sigspec or signum to the processes named by pid or jobspec. sigspec is either a case-insensitive signal name such as SIGKILL (with or without the SIG prefix) or a signal number; signum is a signal number. If sigspec is not present, then SIGTERM is assumed. An argument of -l lists the signal names. If any arguments are supplied when -l is given, the names of the signals corresponding to the arguments are listed, and the return status is 0. The exit_status argument to -l is a number specifying either a signal number or the exit status of a process terminated by a signal. The -L option is equivalent to -l. kill returns true if at least one signal was successfully sent, or false if an error occurs or an invalid option is encountered. let arg [arg ...] 
Each arg is an arithmetic expression to be evaluated (see ARITHMETIC EVALUATION above). If the last arg evaluates to 0, let returns 1; 0 is returned otherwise. local [option] [name[=value] ... | - ] For each argument, a local variable named name is created, and assigned value. The option can be any of the options accepted by declare. When local is used within a function, it causes the variable name to have a visible scope restricted to that function and its children. If name is -, the set of shell options is made local to the function in which local is invoked: shell options changed using the set builtin inside the function are restored to their original values when the function returns. The restore is effected as if a series of set commands were executed to restore the values that were in place before the function. With no operands, local writes a list of local variables to the standard output. It is an error to use local when not within a function. The return status is 0 unless local is used outside a function, an invalid name is supplied, or name is a readonly variable. logout Exit a login shell. mapfile [-d delim] [-n count] [-O origin] [-s count] [-t] [-u fd] [-C callback] [-c quantum] [array] readarray [-d delim] [-n count] [-O origin] [-s count] [-t] [-u fd] [-C callback] [-c quantum] [array] Read lines from the standard input into the indexed array variable array, or from file descriptor fd if the -u option is supplied. The variable MAPFILE is the default array. Options, if supplied, have the following meanings: -d The first character of delim is used to terminate each input line, rather than newline. If delim is the empty string, mapfile will terminate a line when it reads a NUL character. -n Copy at most count lines. If count is 0, all lines are copied. -O Begin assigning to array at index origin. The default index is 0. -s Discard the first count lines read. -t Remove a trailing delim (default newline) from each line read. 
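A minimal sketch of mapfile with the -t option described above:

```shell
bash -c '
  mapfile -t lines < <(printf "a\nb\nc\n")
  echo "${#lines[@]}"      # prints: 3  (lines read)
  echo "${lines[2]}"       # prints: c  (indexing starts at 0)
'
```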
-u Read lines from file descriptor fd instead of the standard input. -C Evaluate callback each time quantum lines are read. The -c option specifies quantum. -c Specify the number of lines read between each call to callback. If -C is specified without -c, the default quantum is 5000. When callback is evaluated, it is supplied the index of the next array element to be assigned and the line to be assigned to that element as additional arguments. callback is evaluated after the line is read but before the array element is assigned. If not supplied with an explicit origin, mapfile will clear array before assigning to it. mapfile returns successfully unless an invalid option or option argument is supplied, array is invalid or unassignable, or if array is not an indexed array. popd [-n] [+n] [-n] Removes entries from the directory stack. The elements are numbered from 0 starting at the first directory listed by dirs. With no arguments, popd removes the top directory from the stack, and changes to the new top directory. Arguments, if supplied, have the following meanings: -n Suppresses the normal change of directory when removing directories from the stack, so that only the stack is manipulated. +n Removes the nth entry counting from the left of the list shown by dirs, starting with zero, from the stack. For example: ``popd +0'' removes the first directory, ``popd +1'' the second. -n Removes the nth entry counting from the right of the list shown by dirs, starting with zero. For example: ``popd -0'' removes the last directory, ``popd -1'' the next to last. If the top element of the directory stack is modified, and the -n option was not supplied, popd uses the cd builtin to change to the directory at the top of the stack. If the cd fails, popd returns a non-zero value. Otherwise, popd returns false if an invalid option is encountered, the directory stack is empty, or a non- existent directory stack entry is specified. 
If the popd command is successful, bash runs dirs to show the final contents of the directory stack, and the return status is 0. printf [-v var] format [arguments] Write the formatted arguments to the standard output under the control of the format. The -v option causes the output to be assigned to the variable var rather than being printed to the standard output. The format is a character string which contains three types of objects: plain characters, which are simply copied to standard output, character escape sequences, which are converted and copied to the standard output, and format specifications, each of which causes printing of the next successive argument. In addition to the standard printf(1) format specifications, printf interprets the following extensions: %b causes printf to expand backslash escape sequences in the corresponding argument in the same way as echo -e. %q causes printf to output the corresponding argument in a format that can be reused as shell input. %Q like %q, but applies any supplied precision to the argument before quoting it. %(datefmt)T causes printf to output the date-time string resulting from using datefmt as a format string for strftime(3). The corresponding argument is an integer representing the number of seconds since the epoch. Two special argument values may be used: -1 represents the current time, and -2 represents the time the shell was invoked. If no argument is specified, conversion behaves as if -1 had been given. This is an exception to the usual printf behavior. The %b, %q, and %T directives all use the field width and precision arguments from the format specification and write that many bytes from (or use that wide a field for) the expanded argument, which usually contains more characters than the original. 
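A brief sketch of these extensions (the %(datefmt)T line depends on the current time, so no fixed output is shown for it):

```shell
bash -c '
  printf "%q\n" "a b"               # prints: a\ b   (reusable as shell input)
  printf "%05d\n" 42                # prints: 00042  (standard width still applies)
  printf "%(%Y-%m-%d)T\n" -1        # current date via strftime(3)
  printf -v out "%s-%s" left right  # assign the result instead of printing it
  echo "$out"                       # prints: left-right
'
```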
Arguments to non-string format specifiers are treated as C constants, except that a leading plus or minus sign is allowed, and if the leading character is a single or double quote, the value is the ASCII value of the following character. The format is reused as necessary to consume all of the arguments. If the format requires more arguments than are supplied, the extra format specifications behave as if a zero value or null string, as appropriate, had been supplied. The return value is zero on success, non-zero on failure. pushd [-n] [+n] [-n] pushd [-n] [dir] Adds a directory to the top of the directory stack, or rotates the stack, making the new top of the stack the current working directory. With no arguments, pushd exchanges the top two elements of the directory stack. Arguments, if supplied, have the following meanings: -n Suppresses the normal change of directory when rotating or adding directories to the stack, so that only the stack is manipulated. +n Rotates the stack so that the nth directory (counting from the left of the list shown by dirs, starting with zero) is at the top. -n Rotates the stack so that the nth directory (counting from the right of the list shown by dirs, starting with zero) is at the top. dir Adds dir to the directory stack at the top After the stack has been modified, if the -n option was not supplied, pushd uses the cd builtin to change to the directory at the top of the stack. If the cd fails, pushd returns a non-zero value. Otherwise, if no arguments are supplied, pushd returns 0 unless the directory stack is empty. When rotating the directory stack, pushd returns 0 unless the directory stack is empty or a non-existent directory stack element is specified. If the pushd command is successful, bash runs dirs to show the final contents of the directory stack. pwd [-LP] Print the absolute pathname of the current working directory. 
The pathname printed contains no symbolic links if the -P option is supplied or the -o physical option to the set builtin command is enabled. If the -L option is used, the pathname printed may contain symbolic links. The return status is 0 unless an error occurs while reading the name of the current directory or an invalid option is supplied. read [-ers] [-a aname] [-d delim] [-i text] [-n nchars] [-N nchars] [-p prompt] [-t timeout] [-u fd] [name ...] One line is read from the standard input, or from the file descriptor fd supplied as an argument to the -u option, split into words as described above under Word Splitting, and the first word is assigned to the first name, the second word to the second name, and so on. If there are more words than names, the remaining words and their intervening delimiters are assigned to the last name. If there are fewer words read from the input stream than names, the remaining names are assigned empty values. The characters in IFS are used to split the line into words using the same rules the shell uses for expansion (described above under Word Splitting). The backslash character (\) may be used to remove any special meaning for the next character read and for line continuation. Options, if supplied, have the following meanings: -a aname The words are assigned to sequential indices of the array variable aname, starting at 0. aname is unset before any new values are assigned. Other name arguments are ignored. -d delim The first character of delim is used to terminate the input line, rather than newline. If delim is the empty string, read will terminate a line when it reads a NUL character. -e If the standard input is coming from a terminal, readline (see READLINE above) is used to obtain the line. Readline uses the current (or default, if line editing was not previously active) editing settings, but uses readline's default filename completion. 
-i text If readline is being used to read the line, text is placed into the editing buffer before editing begins. -n nchars read returns after reading nchars characters rather than waiting for a complete line of input, but honors a delimiter if fewer than nchars characters are read before the delimiter. -N nchars read returns after reading exactly nchars characters rather than waiting for a complete line of input, unless EOF is encountered or read times out. Delimiter characters encountered in the input are not treated specially and do not cause read to return until nchars characters are read. The result is not split on the characters in IFS; the intent is that the variable is assigned exactly the characters read (with the exception of backslash; see the -r option below). -p prompt Display prompt on standard error, without a trailing newline, before attempting to read any input. The prompt is displayed only if input is coming from a terminal. -r Backslash does not act as an escape character. The backslash is considered to be part of the line. In particular, a backslash-newline pair may not then be used as a line continuation. -s Silent mode. If input is coming from a terminal, characters are not echoed. -t timeout Cause read to time out and return failure if a complete line of input (or a specified number of characters) is not read within timeout seconds. timeout may be a decimal number with a fractional portion following the decimal point. This option is only effective if read is reading input from a terminal, pipe, or other special file; it has no effect when reading from regular files. If read times out, read saves any partial input read into the specified variable name. If timeout is 0, read returns immediately, without trying to read any data. The exit status is 0 if input is available on the specified file descriptor, or the read will return EOF, non-zero otherwise. The exit status is greater than 128 if the timeout is exceeded. 
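A short sketch of read using -r with a custom IFS, and of -n:

```shell
bash -c '
  # Split on colons; extra fields and their delimiters go to the last name.
  IFS=: read -r user rest <<< "root:x:0:0"
  echo "$user"    # prints: root
  echo "$rest"    # prints: x:0:0
'
bash -c 'read -r -n 3 head <<< "abcdef"; echo "$head"'   # prints: abc
```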
-u fd Read input from file descriptor fd. If no names are supplied, the line read, without the ending delimiter but otherwise unmodified, is assigned to the variable REPLY. The exit status is zero, unless end- of-file is encountered, read times out (in which case the status is greater than 128), a variable assignment error (such as assigning to a readonly variable) occurs, or an invalid file descriptor is supplied as the argument to -u. readonly [-aAf] [-p] [name[=word] ...] The given names are marked readonly; the values of these names may not be changed by subsequent assignment. If the -f option is supplied, the functions corresponding to the names are so marked. The -a option restricts the variables to indexed arrays; the -A option restricts the variables to associative arrays. If both options are supplied, -A takes precedence. If no name arguments are given, or if the -p option is supplied, a list of all readonly names is printed. The other options may be used to restrict the output to a subset of the set of readonly names. The -p option causes output to be displayed in a format that may be reused as input. If a variable name is followed by =word, the value of the variable is set to word. The return status is 0 unless an invalid option is encountered, one of the names is not a valid shell variable name, or -f is supplied with a name that is not a function. return [n] Causes a function to stop executing and return the value specified by n to its caller. If n is omitted, the return status is that of the last command executed in the function body. If return is executed by a trap handler, the last command used to determine the status is the last command executed before the trap handler. If return is executed during a DEBUG trap, the last command used to determine the status is the last command executed by the trap handler before return was invoked. If return is used outside a function, but during execution of a script by the . 
(source) command, it causes the shell to stop executing that script and return either n or the exit status of the last command executed within the script as the exit status of the script. If n is supplied, the return value is its least significant 8 bits. The return status is non-zero if return is supplied a non-numeric argument, or is used outside a function and not during execution of a script by . or source. Any command associated with the RETURN trap is executed before execution resumes after the function or script. set [-abefhkmnptuvxBCEHPT] [-o option-name] [--] [-] [arg ...] set [+abefhkmnptuvxBCEHPT] [+o option-name] [--] [-] [arg ...] Without options, display the name and value of each shell variable in a format that can be reused as input for setting or resetting the currently-set variables. Read- only variables cannot be reset. In posix mode, only shell variables are listed. The output is sorted according to the current locale. When options are specified, they set or unset shell attributes. Any arguments remaining after option processing are treated as values for the positional parameters and are assigned, in order, to $1, $2, ... $n. Options, if specified, have the following meanings: -a Each variable or function that is created or modified is given the export attribute and marked for export to the environment of subsequent commands. -b Report the status of terminated background jobs immediately, rather than before the next primary prompt. This is effective only when job control is enabled. -e Exit immediately if a pipeline (which may consist of a single simple command), a list, or a compound command (see SHELL GRAMMAR above), exits with a non-zero status. 
The shell does not exit if the command that fails is part of the command list immediately following a while or until keyword, part of the test following the if or elif reserved words, part of any command executed in a && or || list except the command following the final && or ||, any command in a pipeline but the last, or if the command's return value is being inverted with !. If a compound command other than a subshell returns a non-zero status because a command failed while -e was being ignored, the shell does not exit. A trap on ERR, if set, is executed before the shell exits. This option applies to the shell environment and each subshell environment separately (see COMMAND EXECUTION ENVIRONMENT above), and may cause subshells to exit before executing all the commands in the subshell. If a compound command or shell function executes in a context where -e is being ignored, none of the commands executed within the compound command or function body will be affected by the -e setting, even if -e is set and a command returns a failure status. If a compound command or shell function sets -e while executing in a context where -e is ignored, that setting will not have any effect until the compound command or the command containing the function call completes. -f Disable pathname expansion. -h Remember the location of commands as they are looked up for execution. This is enabled by default. -k All arguments in the form of assignment statements are placed in the environment for a command, not just those that precede the command name. -m Monitor mode. Job control is enabled. This option is on by default for interactive shells on systems that support it (see JOB CONTROL above). All processes run in a separate process group. When a background job completes, the shell prints a line containing its exit status. -n Read commands but do not execute them. This may be used to check a shell script for syntax errors. This is ignored by interactive shells. 
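The behavior of -e and one of its documented exceptions can be sketched as:

```shell
# The shell exits at the failing command; "unreachable" is never printed.
bash -c 'set -e; false; echo unreachable'; echo "exit=$?"   # exit=1

# A failure on the left side of || is one of the exceptions above.
bash -c 'set -e; false || true; echo reached'               # prints: reached
```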
-o option-name The option-name can be one of the following: allexport Same as -a. braceexpand Same as -B. emacs Use an emacs-style command line editing interface. This is enabled by default when the shell is interactive, unless the shell is started with the --noediting option. This also affects the editing interface used for read -e. errexit Same as -e. errtrace Same as -E. functrace Same as -T. hashall Same as -h. histexpand Same as -H. history Enable command history, as described above under HISTORY. This option is on by default in interactive shells. ignoreeof The effect is as if the shell command ``IGNOREEOF=10'' had been executed (see Shell Variables above). keyword Same as -k. monitor Same as -m. noclobber Same as -C. noexec Same as -n. noglob Same as -f. nolog Currently ignored. notify Same as -b. nounset Same as -u. onecmd Same as -t. physical Same as -P. pipefail If set, the return value of a pipeline is the value of the last (rightmost) command to exit with a non-zero status, or zero if all commands in the pipeline exit successfully. This option is disabled by default. posix Change the behavior of bash where the default operation differs from the POSIX standard to match the standard (posix mode). See SEE ALSO below for a reference to a document that details how posix mode affects bash's behavior. privileged Same as -p. verbose Same as -v. vi Use a vi-style command line editing interface. This also affects the editing interface used for read -e. xtrace Same as -x. If -o is supplied with no option-name, the values of the current options are printed. If +o is supplied with no option-name, a series of set commands to recreate the current option settings is displayed on the standard output. -p Turn on privileged mode. In this mode, the $ENV and $BASH_ENV files are not processed, shell functions are not inherited from the environment, and the SHELLOPTS, BASHOPTS, CDPATH, and GLOBIGNORE variables, if they appear in the environment, are ignored. 
If the shell is started with the effective user (group) id not equal to the real user (group) id, and the -p option is not supplied, these actions are taken and the effective user id is set to the real user id. If the -p option is supplied at startup, the effective user id is not reset. Turning this option off causes the effective user and group ids to be set to the real user and group ids. -r Enable restricted shell mode. This option cannot be unset once it has been set. -t Exit after reading and executing one command. -u Treat unset variables and parameters other than the special parameters "@" and "*", or array variables subscripted with "@" or "*", as an error when performing parameter expansion. If expansion is attempted on an unset variable or parameter, the shell prints an error message, and, if not interactive, exits with a non-zero status. -v Print shell input lines as they are read. -x After expanding each simple command, for command, case command, select command, or arithmetic for command, display the expanded value of PS4, followed by the command and its expanded arguments or associated word list. -B The shell performs brace expansion (see Brace Expansion above). This is on by default. -C If set, bash does not overwrite an existing file with the >, >&, and <> redirection operators. This may be overridden when creating output files by using the redirection operator >| instead of >. -E If set, any trap on ERR is inherited by shell functions, command substitutions, and commands executed in a subshell environment. The ERR trap is normally not inherited in such cases. -H Enable ! style history substitution. This option is on by default when the shell is interactive. -P If set, the shell does not resolve symbolic links when executing commands such as cd that change the current working directory. It uses the physical directory structure instead. 
By default, bash follows the logical chain of directories when performing commands which change the current directory. -T If set, any traps on DEBUG and RETURN are inherited by shell functions, command substitutions, and commands executed in a subshell environment. The DEBUG and RETURN traps are normally not inherited in such cases. -- If no arguments follow this option, then the positional parameters are unset. Otherwise, the positional parameters are set to the args, even if some of them begin with a -. - Signal the end of options, cause all remaining args to be assigned to the positional parameters. The -x and -v options are turned off. If there are no args, the positional parameters remain unchanged. The options are off by default unless otherwise noted. Using + rather than - causes these options to be turned off. The options can also be specified as arguments to an invocation of the shell. The current set of options may be found in $-. The return status is always true unless an invalid option is encountered. shift [n] The positional parameters from n+1 ... are renamed to $1 .... Parameters represented by the numbers $# down to $#-n+1 are unset. n must be a non-negative number less than or equal to $#. If n is 0, no parameters are changed. If n is not given, it is assumed to be 1. If n is greater than $#, the positional parameters are not changed. The return status is greater than zero if n is greater than $# or less than zero; otherwise 0. shopt [-pqsu] [-o] [optname ...] Toggle the values of settings controlling optional shell behavior. The settings can be either those listed below, or, if the -o option is used, those available with the -o option to the set builtin command. With no options, or with the -p option, a list of all settable options is displayed, with an indication of whether or not each is set; if optnames are supplied, the output is restricted to those options. The -p option causes output to be displayed in a form that may be reused as input. 
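The shift builtin described above, and the reusable output of shopt -p, can be sketched as:

```shell
bash -c 'set -- a b c; shift 2; echo "$1 $#"'   # prints: c 1
bash -c 'shopt -p dotglob'                      # prints: shopt -u dotglob
```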
Other options have the following meanings: -s Enable (set) each optname. -u Disable (unset) each optname. -q Suppresses normal output (quiet mode); the return status indicates whether the optname is set or unset. If multiple optname arguments are given with -q, the return status is zero if all optnames are enabled; non-zero otherwise. -o Restricts the values of optname to be those defined for the -o option to the set builtin. If either -s or -u is used with no optname arguments, shopt shows only those options which are set or unset, respectively. Unless otherwise noted, the shopt options are disabled (unset) by default. The return status when listing options is zero if all optnames are enabled, non-zero otherwise. When setting or unsetting options, the return status is zero unless an optname is not a valid shell option. The list of shopt options is: assoc_expand_once If set, the shell suppresses multiple evaluation of associative array subscripts during arithmetic expression evaluation, while executing builtins that can perform variable assignments, and while executing builtins that perform array dereferencing. autocd If set, a command name that is the name of a directory is executed as if it were the argument to the cd command. This option is only used by interactive shells. cdable_vars If set, an argument to the cd builtin command that is not a directory is assumed to be the name of a variable whose value is the directory to change to. cdspell If set, minor errors in the spelling of a directory component in a cd command will be corrected. The errors checked for are transposed characters, a missing character, and one character too many. If a correction is found, the corrected filename is printed, and the command proceeds. This option is only used by interactive shells. checkhash If set, bash checks that a command found in the hash table exists before trying to execute it. If a hashed command no longer exists, a normal path search is performed. 
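As a sketch of cdable_vars (the variable name proj is arbitrary):

```shell
# "proj" is not a directory, so its value is used as the cd target.
bash -c 'shopt -s cdable_vars; proj=/tmp; cd proj >/dev/null; pwd'   # prints: /tmp
```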
checkjobs If set, bash lists the status of any stopped and running jobs before exiting an interactive shell. If any jobs are running, this causes the exit to be deferred until a second exit is attempted without an intervening command (see JOB CONTROL above). The shell always postpones exiting if any jobs are stopped. checkwinsize If set, bash checks the window size after each external (non-builtin) command and, if necessary, updates the values of LINES and COLUMNS. This option is enabled by default. cmdhist If set, bash attempts to save all lines of a multiple-line command in the same history entry. This allows easy re-editing of multi-line commands. This option is enabled by default, but only has an effect if command history is enabled, as described above under HISTORY. compat31 compat32 compat40 compat41 compat42 compat43 compat44 compat50 These control aspects of the shell's compatibility mode (see SHELL COMPATIBILITY MODE below). complete_fullquote If set, bash quotes all shell metacharacters in filenames and directory names when performing completion. If not set, bash removes metacharacters such as the dollar sign from the set of characters that will be quoted in completed filenames when these metacharacters appear in shell variable references in words to be completed. This means that dollar signs in variable names that expand to directories will not be quoted; however, any dollar signs appearing in filenames will not be quoted, either. This is active only when bash is using backslashes to quote completed filenames. This variable is set by default, which is the default bash behavior in versions through 4.2. direxpand If set, bash replaces directory names with the results of word expansion when performing filename completion. This changes the contents of the readline editing buffer. If not set, bash attempts to preserve what the user typed. 
dirspell If set, bash attempts spelling correction on directory names during word completion if the directory name initially supplied does not exist. dotglob If set, bash includes filenames beginning with a `.' in the results of pathname expansion. The filenames ``.'' and ``..'' must always be matched explicitly, even if dotglob is set. execfail If set, a non-interactive shell will not exit if it cannot execute the file specified as an argument to the exec builtin command. An interactive shell does not exit if exec fails. expand_aliases If set, aliases are expanded as described above under ALIASES. This option is enabled by default for interactive shells. extdebug If set at shell invocation, or in a shell startup file, arrange to execute the debugger profile before the shell starts, identical to the --debugger option. If set after invocation, behavior intended for use by debuggers is enabled: 1. The -F option to the declare builtin displays the source file name and line number corresponding to each function name supplied as an argument. 2. If the command run by the DEBUG trap returns a non-zero value, the next command is skipped and not executed. 3. If the command run by the DEBUG trap returns a value of 2, and the shell is executing in a subroutine (a shell function or a shell script executed by the . or source builtins), the shell simulates a call to return. 4. BASH_ARGC and BASH_ARGV are updated as described in their descriptions above). 5. Function tracing is enabled: command substitution, shell functions, and subshells invoked with ( command ) inherit the DEBUG and RETURN traps. 6. Error tracing is enabled: command substitution, shell functions, and subshells invoked with ( command ) inherit the ERR trap. extglob If set, the extended pattern matching features described above under Pathname Expansion are enabled. extquote If set, $'string' and $"string" quoting is performed within ${parameter} expansions enclosed in double quotes. 
This option is enabled by default. failglob If set, patterns which fail to match filenames during pathname expansion result in an expansion error. force_fignore If set, the suffixes specified by the FIGNORE shell variable cause words to be ignored when performing word completion even if the ignored words are the only possible completions. See SHELL VARIABLES above for a description of FIGNORE. This option is enabled by default. globasciiranges If set, range expressions used in pattern matching bracket expressions (see Pattern Matching above) behave as if in the traditional C locale when performing comparisons. That is, the current locale's collating sequence is not taken into account, so b will not collate between A and B, and upper-case and lower-case ASCII characters will collate together. globskipdots If set, pathname expansion will never match the filenames ``.'' and ``..'', even if the pattern begins with a ``.''. This option is enabled by default. globstar If set, the pattern ** used in a pathname expansion context will match all files and zero or more directories and subdirectories. If the pattern is followed by a /, only directories and subdirectories match. gnu_errfmt If set, shell error messages are written in the standard GNU error message format. histappend If set, the history list is appended to the file named by the value of the HISTFILE variable when the shell exits, rather than overwriting the file. histreedit If set, and readline is being used, a user is given the opportunity to re-edit a failed history substitution. histverify If set, and readline is being used, the results of history substitution are not immediately passed to the shell parser. Instead, the resulting line is loaded into the readline editing buffer, allowing further modification. hostcomplete If set, and readline is being used, bash will attempt to perform hostname completion when a word containing a @ is being completed (see Completing under READLINE above). 
This is enabled by default. huponexit If set, bash will send SIGHUP to all jobs when an interactive login shell exits. inherit_errexit If set, command substitution inherits the value of the errexit option, instead of unsetting it in the subshell environment. This option is enabled when posix mode is enabled. interactive_comments If set, allow a word beginning with # to cause that word and all remaining characters on that line to be ignored in an interactive shell (see COMMENTS above). This option is enabled by default. lastpipe If set, and job control is not active, the shell runs the last command of a pipeline not executed in the background in the current shell environment. lithist If set, and the cmdhist option is enabled, multi-line commands are saved to the history with embedded newlines rather than using semicolon separators where possible. localvar_inherit If set, local variables inherit the value and attributes of a variable of the same name that exists at a previous scope before any new value is assigned. The nameref attribute is not inherited. localvar_unset If set, calling unset on local variables in previous function scopes marks them so subsequent lookups find them unset until that function returns. This is identical to the behavior of unsetting local variables at the current function scope. login_shell The shell sets this option if it is started as a login shell (see INVOCATION above). The value may not be changed. mailwarn If set, and a file that bash is checking for mail has been accessed since the last time it was checked, the message ``The mail in mailfile has been read'' is displayed. no_empty_cmd_completion If set, and readline is being used, bash will not attempt to search the PATH for possible completions when completion is attempted on an empty line. nocaseglob If set, bash matches filenames in a case-insensitive fashion when performing pathname expansion (see Pathname Expansion above). 
nocasematch If set, bash matches patterns in a case-insensitive fashion when performing matching while executing case or [[ conditional commands, when performing pattern substitution word expansions, or when filtering possible completions as part of programmable completion. noexpand_translation If set, bash encloses the translated results of $"..." quoting in single quotes instead of double quotes. If the string is not translated, this has no effect. nullglob If set, bash allows patterns which match no files (see Pathname Expansion above) to expand to a null string, rather than themselves. patsub_replacement If set, bash expands occurrences of & in the replacement string of pattern substitution to the text matched by the pattern, as described under Parameter Expansion above. This option is enabled by default. progcomp If set, the programmable completion facilities (see Programmable Completion above) are enabled. This option is enabled by default. progcomp_alias If set, and programmable completion is enabled, bash treats a command name that doesn't have any completions as a possible alias and attempts alias expansion. If it has an alias, bash attempts programmable completion using the command word resulting from the expanded alias. promptvars If set, prompt strings undergo parameter expansion, command substitution, arithmetic expansion, and quote removal after being expanded as described in PROMPTING above. This option is enabled by default. restricted_shell The shell sets this option if it is started in restricted mode (see RESTRICTED SHELL below). The value may not be changed. This is not reset when the startup files are executed, allowing the startup files to discover whether or not a shell is restricted. shift_verbose If set, the shift builtin prints an error message when the shift count exceeds the number of positional parameters. sourcepath If set, the . (source) builtin uses the value of PATH to find the directory containing the file supplied as an argument. 
This option is enabled by default. varredir_close If set, the shell automatically closes file descriptors assigned using the {varname} redirection syntax (see REDIRECTION above) instead of leaving them open when the command completes. xpg_echo If set, the echo builtin expands backslash-escape sequences by default. suspend [-f] Suspend the execution of this shell until it receives a SIGCONT signal. A login shell, or a shell without job control enabled, cannot be suspended; the -f option can be used to override this and force the suspension. The return status is 0 unless the shell is a login shell or job control is not enabled and -f is not supplied. test expr [ expr ] Return a status of 0 (true) or 1 (false) depending on the evaluation of the conditional expression expr. Each operator and operand must be a separate argument. Expressions are composed of the primaries described above under CONDITIONAL EXPRESSIONS. test does not accept any options, nor does it accept and ignore an argument of -- as signifying the end of options. Expressions may be combined using the following operators, listed in decreasing order of precedence. The evaluation depends on the number of arguments; see below. Operator precedence is used when there are five or more arguments. ! expr True if expr is false. ( expr ) Returns the value of expr. This may be used to override the normal precedence of operators. expr1 -a expr2 True if both expr1 and expr2 are true. expr1 -o expr2 True if either expr1 or expr2 is true. test and [ evaluate conditional expressions using a set of rules based on the number of arguments. 0 arguments The expression is false. 1 argument The expression is true if and only if the argument is not null. 2 arguments If the first argument is !, the expression is true if and only if the second argument is null. If the first argument is one of the unary conditional operators listed above under CONDITIONAL EXPRESSIONS, the expression is true if the unary test is true. 
If the first argument is not a valid unary conditional operator, the expression is false. 3 arguments The following conditions are applied in the order listed. If the second argument is one of the binary conditional operators listed above under CONDITIONAL EXPRESSIONS, the result of the expression is the result of the binary test using the first and third arguments as operands. The -a and -o operators are considered binary operators when there are three arguments. If the first argument is !, the value is the negation of the two-argument test using the second and third arguments. If the first argument is exactly ( and the third argument is exactly ), the result is the one-argument test of the second argument. Otherwise, the expression is false. 4 arguments The following conditions are applied in the order listed. If the first argument is !, the result is the negation of the three-argument expression composed of the remaining arguments. If the first argument is exactly ( and the fourth argument is exactly ), the result is the two-argument test of the second and third arguments. Otherwise, the expression is parsed and evaluated according to precedence using the rules listed above. 5 or more arguments The expression is parsed and evaluated according to precedence using the rules listed above. When used with test or [, the < and > operators sort lexicographically using ASCII ordering. times Print the accumulated user and system times for the shell and for processes run from the shell. The return status is 0. trap [-lp] [[arg] sigspec ...] The command arg is to be read and executed when the shell receives signal(s) sigspec. If arg is absent (and there is a single sigspec) or -, each specified signal is reset to its original disposition (the value it had upon entrance to the shell). If arg is the null string the signal specified by each sigspec is ignored by the shell and by the commands it invokes. 
If arg is not present and -p has been supplied, then the trap commands associated with each sigspec are displayed. If no arguments are supplied or if only -p is given, trap prints the list of commands associated with each signal. The -l option causes the shell to print a list of signal names and their corresponding numbers. Each sigspec is either a signal name defined in <signal.h>, or a signal number. Signal names are case insensitive and the SIG prefix is optional. If a sigspec is EXIT (0) the command arg is executed on exit from the shell. If a sigspec is DEBUG, the command arg is executed before every simple command, for command, case command, select command, every arithmetic for command, and before the first command executes in a shell function (see SHELL GRAMMAR above). Refer to the description of the extdebug option to the shopt builtin for details of its effect on the DEBUG trap. If a sigspec is RETURN, the command arg is executed each time a shell function or a script executed with the . or source builtins finishes executing. If a sigspec is ERR, the command arg is executed whenever a pipeline (which may consist of a single simple command), a list, or a compound command returns a non-zero exit status, subject to the following conditions. The ERR trap is not executed if the failed command is part of the command list immediately following a while or until keyword, part of the test in an if statement, part of a command executed in a && or || list except the command following the final && or ||, any command in a pipeline but the last, or if the command's return value is being inverted using !. These are the same conditions obeyed by the errexit (-e) option. Signals ignored upon entry to the shell cannot be trapped or reset. Trapped signals that are not being ignored are reset to their original values in a subshell or subshell environment when one is created. The return status is false if any sigspec is invalid; otherwise trap returns true. 
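A short, hypothetical illustration of the EXIT and ERR traps (run in a child bash so the traps do not affect the invoking shell):

```shell
# Hypothetical demo: EXIT runs when the shell exits; ERR runs after a
# failing command, subject to the conditions listed above.
bash -c '
  trap "echo cleanup" EXIT
  trap "echo nonzero: \$?" ERR
  false     # non-zero status: ERR trap fires
  true      # ERR does not fire here
'
# Output:
#   nonzero: 1
#   cleanup
```

Note that `$?` is escaped so it is evaluated when the trap runs, not when it is installed.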
type [-aftpP] name [name ...] With no options, indicate how each name would be interpreted if used as a command name. If the -t option is used, type prints a string which is one of alias, keyword, function, builtin, or file if name is an alias, shell reserved word, function, builtin, or disk file, respectively. If the name is not found, then nothing is printed, and an exit status of false is returned. If the -p option is used, type either returns the name of the disk file that would be executed if name were specified as a command name, or nothing if ``type -t name'' would not return file. The -P option forces a PATH search for each name, even if ``type -t name'' would not return file. If a command is hashed, -p and -P print the hashed value, which is not necessarily the file that appears first in PATH. If the -a option is used, type prints all of the places that contain an executable named name. This includes aliases and functions, if and only if the -p option is not also used. The table of hashed commands is not consulted when using -a. The -f option suppresses shell function lookup, as with the command builtin. type returns true if all of the arguments are found, false if any are not found. ulimit [-HS] -a ulimit [-HS] [-bcdefiklmnpqrstuvxPRT [limit]] Provides control over the resources available to the shell and to processes started by it, on systems that allow such control. The -H and -S options specify that the hard or soft limit is set for the given resource. A hard limit cannot be increased by a non-root user once it is set; a soft limit may be increased up to the value of the hard limit. If neither -H nor -S is specified, both the soft and hard limits are set. The value of limit can be a number in the unit specified for the resource or one of the special values hard, soft, or unlimited, which stand for the current hard limit, the current soft limit, and no limit, respectively. 
If limit is omitted, the current value of the soft limit of the resource is printed, unless the -H option is given. When more than one resource is specified, the limit name and unit, if appropriate, are printed before the value. Other options are interpreted as follows: -a All current limits are reported; no limits are set -b The maximum socket buffer size -c The maximum size of core files created -d The maximum size of a process's data segment -e The maximum scheduling priority ("nice") -f The maximum size of files written by the shell and its children -i The maximum number of pending signals -k The maximum number of kqueues that may be allocated -l The maximum size that may be locked into memory -m The maximum resident set size (many systems do not honor this limit) -n The maximum number of open file descriptors (most systems do not allow this value to be set) -p The pipe size in 512-byte blocks (this may not be set) -q The maximum number of bytes in POSIX message queues -r The maximum real-time scheduling priority -s The maximum stack size -t The maximum amount of cpu time in seconds -u The maximum number of processes available to a single user -v The maximum amount of virtual memory available to the shell and, on some systems, to its children -x The maximum number of file locks -P The maximum number of pseudoterminals -R The maximum time a real-time process can run before blocking, in microseconds -T The maximum number of threads If limit is given, and the -a option is not used, limit is the new value of the specified resource. If no option is given, then -f is assumed. Values are in 1024-byte increments, except for -t, which is in seconds; -R, which is in microseconds; -p, which is in units of 512-byte blocks; -P, -T, -b, -k, -n, and -u, which are unscaled values; and, when in posix mode, -c and -f, which are in 512-byte increments. The return status is 0 unless an invalid option or argument is supplied, or an error occurs while setting a new limit. 
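As a sketch of soft versus hard limits, the following lowers the soft open-files limit (-n) in a child shell, leaving the invoking shell untouched; the value 64 is arbitrary:

```shell
# Hypothetical demo: query and lower the soft limit on open file
# descriptors in a child shell only. No privilege is needed to lower
# a soft limit; raising it above the hard limit would fail.
bash -c '
  echo "soft=$(ulimit -Sn) hard=$(ulimit -Hn)"
  ulimit -Sn 64   # set only the soft limit
  ulimit -Sn      # prints 64
'
```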
umask [-p] [-S] [mode] The user file-creation mask is set to mode. If mode begins with a digit, it is interpreted as an octal number; otherwise it is interpreted as a symbolic mode mask similar to that accepted by chmod(1). If mode is omitted, the current value of the mask is printed. The -S option causes the mask to be printed in symbolic form; the default output is an octal number. If the -p option is supplied, and mode is omitted, the output is in a form that may be reused as input. The return status is 0 if the mode was successfully changed or if no mode argument was supplied, and false otherwise. unalias [-a] [name ...] Remove each name from the list of defined aliases. If -a is supplied, all alias definitions are removed. The return value is true unless a supplied name is not a defined alias. unset [-fv] [-n] [name ...] For each name, remove the corresponding variable or function. If the -v option is given, each name refers to a shell variable, and that variable is removed. Read-only variables may not be unset. If -f is specified, each name refers to a shell function, and the function definition is removed. If the -n option is supplied, and name is a variable with the nameref attribute, name will be unset rather than the variable it references. -n has no effect if the -f option is supplied. If no options are supplied, each name refers to a variable; if there is no variable by that name, a function with that name, if any, is unset. Each unset variable or function is removed from the environment passed to subsequent commands. If any of BASH_ALIASES, BASH_ARGV0, BASH_CMDS, BASH_COMMAND, BASH_SUBSHELL, BASHPID, COMP_WORDBREAKS, DIRSTACK, EPOCHREALTIME, EPOCHSECONDS, FUNCNAME, GROUPS, HISTCMD, LINENO, RANDOM, SECONDS, or SRANDOM are unset, they lose their special properties, even if they are subsequently reset. The exit status is true unless a name is readonly or may not be unset. wait [-fn] [-p varname] [id ...] 
Wait for each specified child process and return its termination status. Each id may be a process ID or a job specification; if a job spec is given, all processes in that job's pipeline are waited for. If id is not given, wait waits for all running background jobs and the last-executed process substitution, if its process id is the same as $!, and the return status is zero. If the -n option is supplied, wait waits for a single job from the list of ids or, if no ids are supplied, any job, to complete and returns its exit status. If none of the supplied arguments is a child of the shell, or if no arguments are supplied and the shell has no unwaited-for children, the exit status is 127. If the -p option is supplied, the process or job identifier of the job for which the exit status is returned is assigned to the variable varname named by the option argument. The variable will be unset initially, before any assignment. This is useful only when the -n option is supplied. Supplying the -f option, when job control is enabled, forces wait to wait for id to terminate before returning its status, instead of returning when it changes status. If id specifies a non-existent process or job, the return status is 127. If wait is interrupted by a signal, the return status will be greater than 128, as described under SIGNALS above. Otherwise, the return status is the exit status of the last process or job waited for. SHELL COMPATIBILITY MODE top Bash-4.0 introduced the concept of a shell compatibility level, specified as a set of options to the shopt builtin (compat31, compat32, compat40, compat41, and so on). There is only one current compatibility level -- each option is mutually exclusive. The compatibility level is intended to allow users to select behavior from previous versions that is incompatible with newer versions while they migrate scripts to use current features and behavior. It's intended to be a temporary solution. 
This section does not mention behavior that is standard for a particular version (e.g., setting compat32 means that quoting the rhs of the regexp matching operator quotes special regexp characters in the word, which is default behavior in bash-3.2 and subsequent versions). If a user enables, say, compat32, it may affect the behavior of other compatibility levels up to and including the current compatibility level. The idea is that each compatibility level controls behavior that changed in that version of bash, but that behavior may have been present in earlier versions. For instance, the change to use locale-based comparisons with the [[ command came in bash-4.1, and earlier versions used ASCII-based comparisons, so enabling compat32 will enable ASCII-based comparisons as well. That granularity may not be sufficient for all uses, and as a result users should employ compatibility levels carefully. Read the documentation for a particular feature to find out the current behavior. Bash-4.3 introduced a new shell variable: BASH_COMPAT. The value assigned to this variable (a decimal version number like 4.2, or an integer corresponding to the compatNN option, like 42) determines the compatibility level. Starting with bash-4.4, Bash has begun deprecating older compatibility levels. Eventually, the options will be removed in favor of BASH_COMPAT. Bash-5.0 is the final version for which there will be an individual shopt option for the previous version. Users should use BASH_COMPAT on bash-5.0 and later versions. The following table describes the behavior changes controlled by each compatibility level setting. The compatNN tag is used as shorthand for setting the compatibility level to NN using one of the following mechanisms. For versions prior to bash-5.0, the compatibility level may be set using the corresponding compatNN shopt option. For bash-4.3 and later versions, the BASH_COMPAT variable is preferred, and it is required for bash-5.1 and later versions. 
compat31 quoting the rhs of the [[ command's regexp matching operator (=~) has no special effect compat32 interrupting a command list such as "a ; b ; c" causes the execution of the next command in the list (in bash-4.0 and later versions, the shell acts as if it received the interrupt, so interrupting one command in a list aborts the execution of the entire list) compat40 the < and > operators to the [[ command do not consider the current locale when comparing strings; they use ASCII ordering. Bash versions prior to bash-4.1 use ASCII collation and strcmp(3); bash-4.1 and later use the current locale's collation sequence and strcoll(3). compat41 in posix mode, time may be followed by options and still be recognized as a reserved word (this is POSIX interpretation 267) in posix mode, the parser requires that an even number of single quotes occur in the word portion of a double-quoted parameter expansion and treats them specially, so that characters within the single quotes are considered quoted (this is POSIX interpretation 221) compat42 the replacement string in double-quoted pattern substitution does not undergo quote removal, as it does in versions after bash-4.2 in posix mode, single quotes are considered special when expanding the word portion of a double-quoted parameter expansion and can be used to quote a closing brace or other special character (this is part of POSIX interpretation 221); in later versions, single quotes are not special within double-quoted word expansions compat43 the shell does not print a warning message if an attempt is made to use a quoted compound assignment as an argument to declare (e.g., declare -a foo='(1 2)'). Later versions warn that this usage is deprecated word expansion errors are considered non-fatal errors that cause the current command to fail, even in posix mode (the default behavior is to make them fatal errors that cause the shell to exit) when executing a shell function, the loop state (while/until/etc.) 
is not reset, so break or continue in that function will break or continue loops in the calling context. Bash-4.4 and later reset the loop state to prevent this compat44 the shell sets up the values used by BASH_ARGV and BASH_ARGC so they can expand to the shell's positional parameters even if extended debugging mode is not enabled a subshell inherits loops from its parent context, so break or continue will cause the subshell to exit. Bash-5.0 and later reset the loop state to prevent the exit variable assignments preceding builtins like export and readonly that set attributes continue to affect variables with the same name in the calling environment even if the shell is not in posix mode compat50 Bash-5.1 changed the way $RANDOM is generated to introduce slightly more randomness. If the shell compatibility level is set to 50 or lower, it reverts to the method from bash-5.0 and previous versions, so seeding the random number generator by assigning a value to RANDOM will produce the same sequence as in bash-5.0 If the command hash table is empty, bash versions prior to bash-5.1 printed an informational message to that effect, even when producing output that can be reused as input. Bash-5.1 suppresses that message when the -l option is supplied. compat51 The unset builtin treats attempts to unset array subscripts @ and * differently depending on whether the array is indexed or associative, and differently than in previous versions. RESTRICTED SHELL top If bash is started with the name rbash, or the -r option is supplied at invocation, the shell becomes restricted. A restricted shell is used to set up an environment more controlled than the standard shell. It behaves identically to bash with the exception that the following are disallowed or not performed: changing directories with cd setting or unsetting the values of SHELL, PATH, HISTFILE, ENV, or BASH_ENV specifying command names containing / specifying a filename containing a / as an argument to the . 
builtin command specifying a filename containing a slash as an argument to the history builtin command specifying a filename containing a slash as an argument to the -p option to the hash builtin command importing function definitions from the shell environment at startup parsing the value of SHELLOPTS from the shell environment at startup redirecting output using the >, >|, <>, >&, &>, and >> redirection operators using the exec builtin command to replace the shell with another command adding or deleting builtin commands with the -f and -d options to the enable builtin command using the enable builtin command to enable disabled shell builtins specifying the -p option to the command builtin command turning off restricted mode with set +r or shopt -u restricted_shell. These restrictions are enforced after any startup files are read. When a command that is found to be a shell script is executed (see COMMAND EXECUTION above), rbash turns off any restrictions in the shell spawned to execute the script. 
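The restrictions can be observed directly. A hedged sketch, probing a few of them from an unrestricted shell (the probe commands and paths are illustrative):

```shell
# Hypothetical demo: each probe runs under bash -r and fails, so the
# fallback message after || is printed.
bash -r -c 'cd /tmp' 2>/dev/null          || echo 'cd is blocked'
bash -r -c 'PATH=/tmp' 2>/dev/null        || echo 'assigning PATH is blocked'
bash -r -c 'echo x > /tmp/out' 2>/dev/null || echo 'output redirection is blocked'
```

Each child shell exits with a non-zero status because the only command it runs is disallowed in restricted mode.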
SEE ALSO top Bash Reference Manual, Brian Fox and Chet Ramey The Gnu Readline Library, Brian Fox and Chet Ramey The Gnu History Library, Brian Fox and Chet Ramey Portable Operating System Interface (POSIX) Part 2: Shell and Utilities, IEEE -- http://pubs.opengroup.org/onlinepubs/9699919799/ http://tiswww.case.edu/~chet/bash/POSIX -- a description of posix mode sh(1), ksh(1), csh(1) emacs(1), vi(1) readline(3) FILES top /bin/bash The bash executable /etc/profile The systemwide initialization file, executed for login shells ~/.bash_profile The personal initialization file, executed for login shells ~/.bashrc The individual per-interactive-shell startup file ~/.bash_logout The individual login shell cleanup file, executed when a login shell exits ~/.bash_history The default value of HISTFILE, the file in which bash saves the command history ~/.inputrc Individual readline initialization file AUTHORS top Brian Fox, Free Software Foundation bfox@gnu.org Chet Ramey, Case Western Reserve University chet.ramey@case.edu BUG REPORTS top If you find a bug in bash, you should report it. But first, you should make sure that it really is a bug, and that it appears in the latest version of bash. The latest version is always available from ftp://ftp.gnu.org/pub/gnu/bash/ and http://git.savannah.gnu.org/cgit/bash.git/snapshot/bash- master.tar.gz. Once you have determined that a bug actually exists, use the bashbug command to submit a bug report. If you have a fix, you are encouraged to mail that as well! Suggestions and `philosophical' bug reports may be mailed to bug-bash@gnu.org or posted to the Usenet newsgroup gnu.bash.bug. ALL bug reports should include: The version number of bash The hardware and operating system The compiler used to compile A description of the bug behaviour A short script or `recipe' which exercises the bug bashbug inserts the first three items automatically into the template it provides for filing a bug report. 
Comments and bug reports concerning this manual page should be directed to chet.ramey@case.edu. BUGS top It's too big and too slow. There are some subtle differences between bash and traditional versions of sh, mostly because of the POSIX specification. Aliases are confusing in some uses. Shell builtin commands and functions are not stoppable/restartable. Compound commands and command sequences of the form `a ; b ; c' are not handled gracefully when process suspension is attempted. When a process is stopped, the shell immediately executes the next command in the sequence. It suffices to place the sequence of commands between parentheses to force it into a subshell, which may be stopped as a unit. Array variables may not (yet) be exported. There may be only one active coprocess at a time. COLOPHON top This page is part of the bash (Bourne again shell) project. Information about the project can be found at http://www.gnu.org/software/bash/. If you have a bug report for this manual page, see http://www.gnu.org/software/bash/. This page was obtained from the project's upstream Git repository git://git.savannah.gnu.org/bash.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-11-14.) 
If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org GNU Bash 5.2 2022 September 19 BASH(1) Pages that refer to this page: getopt(1), intro(1), kill(1), pmdabash(1), pv(1), quilt(1), systemctl(1), systemd-notify(1), systemd-run(1), time(1), setpgid(2), getopt(3), history(3), readline(3), strcmp(3), termios(3), ulimit(3), core(5), credentials(7), environ(7), suffixes(7), time_namespaces(7), cupsenable(8), dpkg-fsys-usrunmess(8), wg(8), wg-quick(8) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH. | # bash\n\n> Bourne-Again SHell, an `sh`-compatible command-line interpreter.\n> See also: `zsh`, `histexpand` (history expansion).\n> More information: <https://www.gnu.org/software/bash/>.\n\n- Start an interactive shell session:\n\n`bash`\n\n- Start an interactive shell session without loading startup configs:\n\n`bash --norc`\n\n- Execute specific [c]ommands:\n\n`bash -c "{{echo 'bash is executed'}}"`\n\n- Execute a specific script:\n\n`bash {{path/to/script.sh}}`\n\n- E[x]ecute a specific script, printing each command before executing it:\n\n`bash -x {{path/to/script.sh}}`\n\n- Execute a specific script and stop at the first [e]rror:\n\n`bash -e {{path/to/script.sh}}`\n\n- Execute specific commands from `stdin`:\n\n`{{echo "echo 'bash is executed'"}} | bash`\n\n- Start a [r]estricted shell session:\n\n`bash -r`\n |
batch | batch(1p) - Linux manual page batch(1p) Linux manual page PROLOG | NAME | SYNOPSIS | DESCRIPTION | OPTIONS | OPERANDS | STDIN | INPUT FILES | ENVIRONMENT VARIABLES | ASYNCHRONOUS EVENTS | STDOUT | STDERR | OUTPUT FILES | EXTENDED DESCRIPTION | EXIT STATUS | CONSEQUENCES OF ERRORS | APPLICATION USAGE | EXAMPLES | RATIONALE | FUTURE DIRECTIONS | SEE ALSO | COPYRIGHT BATCH(1P) POSIX Programmer's Manual BATCH(1P) PROLOG top This manual page is part of the POSIX Programmer's Manual. The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux. NAME top batch - schedule commands to be executed in a batch queue SYNOPSIS top batch DESCRIPTION top The batch utility shall read commands from standard input and schedule them for execution in a batch queue. It shall be the equivalent of the command: at -q b -m now where queue b is a special at queue, specifically for batch jobs. Batch jobs shall be submitted to the batch queue with no time constraints and shall be run by the system using algorithms, based on unspecified factors, that may vary with each invocation of batch. Users shall be permitted to use batch if their name appears in the file at.allow which is located in an implementation-defined directory. If that file does not exist, the file at.deny, which is located in an implementation-defined directory, shall be checked to determine whether the user shall be denied access to batch. If neither file exists, only a process with appropriate privileges shall be allowed to submit a job. If only at.deny exists and is empty, global usage shall be permitted. The at.allow and at.deny files shall consist of one user name per line. OPTIONS top None. OPERANDS top None. 
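The at.allow/at.deny access check described above can be sketched as shell logic; the paths are illustrative, since POSIX leaves the directory implementation-defined:

```shell
# Hypothetical sketch of the batch/at access check. The real location
# of at.allow and at.deny is implementation-defined; /etc is assumed
# here purely for illustration.
allow=/etc/at.allow    # illustrative path
deny=/etc/at.deny      # illustrative path
user=$(id -un)

if [ -f "$allow" ]; then
    # at.allow exists: only listed users may submit jobs
    grep -qx "$user" "$allow" && echo allowed || echo denied
elif [ -f "$deny" ]; then
    # at.deny exists: listed users are refused; an empty file
    # permits everyone
    grep -qx "$user" "$deny" && echo denied || echo allowed
else
    # neither file exists: only a privileged process may submit
    [ "$(id -u)" -eq 0 ] && echo allowed || echo denied
fi
```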
STDIN top The standard input shall be a text file consisting of commands acceptable to the shell command language described in Chapter 2, Shell Command Language. INPUT FILES top The text files at.allow and at.deny, which are located in an implementation-defined directory, shall contain zero or more user names, one per line, of users who are, respectively, authorized or denied access to the at and batch utilities. ENVIRONMENT VARIABLES top The following environment variables shall affect the execution of batch: LANG Provide a default value for the internationalization variables that are unset or null. (See the Base Definitions volume of POSIX.1-2017, Section 8.2, Internationalization Variables for the precedence of internationalization variables used to determine the values of locale categories.) LC_ALL If set to a non-empty string value, override the values of all the other internationalization variables. LC_CTYPE Determine the locale for the interpretation of sequences of bytes of text data as characters (for example, single-byte as opposed to multi-byte characters in arguments and input files). LC_MESSAGES Determine the locale that should be used to affect the format and contents of diagnostic messages written to standard error and informative messages written to standard output. LC_TIME Determine the format and contents for date and time strings written by batch. NLSPATH Determine the location of message catalogs for the processing of LC_MESSAGES. SHELL Determine the name of a command interpreter to be used to invoke the at-job. If the variable is unset or null, sh shall be used. If it is set to a value other than a name for sh, the implementation shall do one of the following: use that shell; use sh; use the login shell from the user database; any of the preceding accompanied by a warning diagnostic about which was chosen. TZ Determine the timezone. 
The job shall be submitted for execution at the time specified by timespec or -t time relative to the timezone specified by the TZ variable. If timespec specifies a timezone, it overrides TZ. If timespec does not specify a timezone and TZ is unset or null, an unspecified default timezone shall be used. ASYNCHRONOUS EVENTS top Default. STDOUT top When standard input is a terminal, prompts of unspecified format for each line of the user input described in the STDIN section may be written to standard output. STDERR top The following shall be written to standard error when a job has been successfully submitted: "job %s at %s\n", at_job_id, <date> where date shall be equivalent in format to the output of: date +"%a %b %e %T %Y" The date and time written shall be adjusted so that they appear in the timezone of the user (as determined by the TZ variable). Neither this, nor warning messages concerning the selection of the command interpreter, are considered a diagnostic that changes the exit status. Diagnostic messages, if any, shall be written to standard error. OUTPUT FILES top None. EXTENDED DESCRIPTION top None. EXIT STATUS top The following exit values shall be returned: 0 Successful completion. >0 An error occurred. CONSEQUENCES OF ERRORS top The job shall not be scheduled. The following sections are informative. APPLICATION USAGE top It may be useful to redirect standard output within the specified commands. EXAMPLES top 1. This sequence can be used at a terminal:

batch
sort < file >outfile
EOT

2. This sequence, which demonstrates redirecting standard error to a pipe, is useful in a command procedure (the sequence of output redirection specifications is significant):

batch <<!
diff file1 file2 2>&1 >outfile | mailx mygroup
!

RATIONALE top Early proposals described batch in a manner totally separated from at, even though the historical model treated it almost as a synonym for at -qb. 
A number of features were added to list and control batch work separately from those in at. Upon further reflection, it was decided that the benefit of this did not merit the change to the historical interface. The -m option was included on the equivalent at command because it is historical practice to mail results to the submitter, even if all job-produced output is redirected. As explained in the RATIONALE for at, the now keyword submits the job for immediate execution (after scheduling delays), despite some historical systems where at now would have been considered an error. FUTURE DIRECTIONS top None. SEE ALSO top at(1p) The Base Definitions volume of POSIX.1-2017, Chapter 8, Environment Variables COPYRIGHT top Portions of this text are reprinted and reproduced in electronic form from IEEE Std 1003.1-2017, Standard for Information Technology -- Portable Operating System Interface (POSIX), The Open Group Base Specifications Issue 7, 2018 Edition, Copyright (C) 2018 by the Institute of Electrical and Electronics Engineers, Inc and The Open Group. In the event of any discrepancy between this version and the original IEEE and The Open Group Standard, the original IEEE and The Open Group Standard is the referee document. The original Standard can be obtained online at http://www.opengroup.org/unix/online.html . Any typographical or formatting errors that appear in this page are most likely to have been introduced during the conversion of the source files to man page format. To report such errors, see https://www.kernel.org/doc/man-pages/reporting_bugs.html . IEEE/The Open Group 2017 BATCH(1P) Pages that refer to this page: at(1p) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. 
| # batch\n\n> Execute commands at a later time when the system load levels permit.\n> The atd (or atrun) service must be running for the scheduled jobs to actually execute.\n> More information: <https://manned.org/batch>.\n\n- Execute commands from `stdin` (press `Ctrl + D` when done):\n\n`batch`\n\n- Execute a command from `stdin`:\n\n`echo "{{./make_db_backup.sh}}" | batch`\n\n- Execute commands from a given [f]ile:\n\n`batch -f {{path/to/file}}`\n |
bc | bc(1p) - Linux manual page bc(1p) Linux manual page PROLOG | NAME | SYNOPSIS | DESCRIPTION | OPTIONS | OPERANDS | STDIN | INPUT FILES | ENVIRONMENT VARIABLES | ASYNCHRONOUS EVENTS | STDOUT | STDERR | OUTPUT FILES | EXTENDED DESCRIPTION | EXIT STATUS | CONSEQUENCES OF ERRORS | APPLICATION USAGE | EXAMPLES | RATIONALE | FUTURE DIRECTIONS | SEE ALSO | COPYRIGHT BC(1P) POSIX Programmer's Manual BC(1P) PROLOG top This manual page is part of the POSIX Programmer's Manual. The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux. NAME top bc - arbitrary-precision arithmetic language SYNOPSIS top bc [-l] [file...] DESCRIPTION top The bc utility shall implement an arbitrary precision calculator. It shall take input from any files given, then read from the standard input. If the standard input and standard output to bc are attached to a terminal, the invocation of bc shall be considered to be interactive, causing behavioral constraints described in the following sections. OPTIONS top The bc utility shall conform to the Base Definitions volume of POSIX.1-2017, Section 12.2, Utility Syntax Guidelines. The following option shall be supported: -l (The letter ell.) Define the math functions and initialize scale to 20, instead of the default zero; see the EXTENDED DESCRIPTION section. OPERANDS top The following operand shall be supported: file A pathname of a text file containing bc program statements. After all files have been read, bc shall read the standard input. STDIN top See the INPUT FILES section. INPUT FILES top Input files shall be text files containing a sequence of comments, statements, and function definitions that shall be executed as they are read. 
ENVIRONMENT VARIABLES top The following environment variables shall affect the execution of bc: LANG Provide a default value for the internationalization variables that are unset or null. (See the Base Definitions volume of POSIX.12017, Section 8.2, Internationalization Variables for the precedence of internationalization variables used to determine the values of locale categories.) LC_ALL If set to a non-empty string value, override the values of all the other internationalization variables. LC_CTYPE Determine the locale for the interpretation of sequences of bytes of text data as characters (for example, single-byte as opposed to multi-byte characters in arguments and input files). LC_MESSAGES Determine the locale that should be used to affect the format and contents of diagnostic messages written to standard error. NLSPATH Determine the location of message catalogs for the processing of LC_MESSAGES. ASYNCHRONOUS EVENTS top Default. STDOUT top The output of the bc utility shall be controlled by the program read, and consist of zero or more lines containing the value of all executed expressions without assignments. The radix and precision of the output shall be controlled by the values of the obase and scale variables; see the EXTENDED DESCRIPTION section. STDERR top The standard error shall be used only for diagnostic messages. OUTPUT FILES top None. EXTENDED DESCRIPTION top Grammar The grammar in this section and the lexical conventions in the following section shall together describe the syntax for bc programs. The general conventions for this style of grammar are described in Section 1.3, Grammar Conventions. A valid program can be represented as the non-terminal symbol program in the grammar. This formal syntax shall take precedence over the text syntax description. 
%token EOF NEWLINE STRING LETTER NUMBER %token MUL_OP /* '*', '/', '%' */ %token ASSIGN_OP /* '=', '+=', '-=', '*=', '/=', '%=', '^=' */ %token REL_OP /* '==', '<=', '>=', '!=', '<', '>' */ %token INCR_DECR /* '++', '--' */ %token Define Break Quit Length /* 'define', 'break', 'quit', 'length' */ %token Return For If While Sqrt /* 'return', 'for', 'if', 'while', 'sqrt' */ %token Scale Ibase Obase Auto /* 'scale', 'ibase', 'obase', 'auto' */ %start program %% program : EOF | input_item program ; input_item : semicolon_list NEWLINE | function ; semicolon_list : /* empty */ | statement | semicolon_list ';' statement | semicolon_list ';' ; statement_list : /* empty */ | statement | statement_list NEWLINE | statement_list NEWLINE statement | statement_list ';' | statement_list ';' statement ; statement : expression | STRING | Break | Quit | Return | Return '(' return_expression ')' | For '(' expression ';' relational_expression ';' expression ')' statement | If '(' relational_expression ')' statement | While '(' relational_expression ')' statement | '{' statement_list '}' ; function : Define LETTER '(' opt_parameter_list ')' '{' NEWLINE opt_auto_define_list statement_list '}' ; opt_parameter_list : /* empty */ | parameter_list ; parameter_list : LETTER | define_list ',' LETTER ; opt_auto_define_list : /* empty */ | Auto define_list NEWLINE | Auto define_list ';' ; define_list : LETTER | LETTER '[' ']' | define_list ',' LETTER | define_list ',' LETTER '[' ']' ; opt_argument_list : /* empty */ | argument_list ; argument_list : expression | LETTER '[' ']' ',' argument_list ; relational_expression : expression | expression REL_OP expression ; return_expression : /* empty */ | expression ; expression : named_expression | NUMBER | '(' expression ')' | LETTER '(' opt_argument_list ')' | '-' expression | expression '+' expression | expression '-' expression | expression MUL_OP expression | expression '^' expression | INCR_DECR named_expression | named_expression INCR_DECR | 
named_expression ASSIGN_OP expression | Length '(' expression ')' | Sqrt '(' expression ')' | Scale '(' expression ')' ; named_expression : LETTER | LETTER '[' expression ']' | Scale | Ibase | Obase ; Lexical Conventions in bc The lexical conventions for bc programs, with respect to the preceding grammar, shall be as follows: 1. Except as noted, bc shall recognize the longest possible token or delimiter beginning at a given point. 2. A comment shall consist of any characters beginning with the two adjacent characters "/*" and terminated by the next occurrence of the two adjacent characters "*/". Comments shall have no effect except to delimit lexical tokens. 3. The <newline> shall be recognized as the token NEWLINE. 4. The token STRING shall represent a string constant; it shall consist of any characters beginning with the double-quote character ('"') and terminated by another occurrence of the double-quote character. The value of the string is the sequence of all characters between, but not including, the two double-quote characters. All characters shall be taken literally from the input, and there is no way to specify a string containing a double-quote character. The length of the value of each string shall be limited to {BC_STRING_MAX} bytes. 5. A <blank> shall have no effect except as an ordinary character if it appears within a STRING token, or to delimit a lexical token other than STRING. 6. The combination of a <backslash> character immediately followed by a <newline> shall have no effect other than to delimit lexical tokens with the following exceptions: * It shall be interpreted as the character sequence "\<newline>" in STRING tokens. * It shall be ignored as part of a multi-line NUMBER token. 7. The token NUMBER shall represent a numeric constant. It shall be recognized by the following grammar: NUMBER : integer | '.' integer | integer '.' | integer '.' 
integer ; integer : digit | integer digit ; digit : 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | A | B | C | D | E | F ; 8. The value of a NUMBER token shall be interpreted as a numeral in the base specified by the value of the internal register ibase (described below). Each of the digit characters shall have the value from 0 to 15 in the order listed here, and the <period> character shall represent the radix point. The behavior is undefined if digits greater than or equal to the value of ibase appear in the token. However, note the exception for single-digit values being assigned to ibase and obase themselves, in Operations in bc. 9. The following keywords shall be recognized as tokens: auto ibase length return while break if obase scale define for quit sqrt 10. Any of the following characters occurring anywhere except within a keyword shall be recognized as the token LETTER: a b c d e f g h i j k l m n o p q r s t u v w x y z 11. The following single-character and two-character sequences shall be recognized as the token ASSIGN_OP: = += -= *= /= %= ^= 12. If an '=' character, as the beginning of a token, is followed by a '-' character with no intervening delimiter, the behavior is undefined. 13. The following single-characters shall be recognized as the token MUL_OP: * / % 14. The following single-character and two-character sequences shall be recognized as the token REL_OP: == <= >= != < > 15. The following two-character sequences shall be recognized as the token INCR_DECR: ++ -- 16. The following single characters shall be recognized as tokens whose names are the character: <newline> ( ) , + - ; [ ] ^ { } 17. The token EOF is returned when the end of input is reached. Operations in bc There are three kinds of identifiers: ordinary identifiers, array identifiers, and function identifiers. All three types consist of single lowercase letters. Array identifiers shall be followed by square brackets ("[]"). An array subscript is required except in an argument or auto list. 
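The NUMBER-token rules above (digits 0-9 and A-F always carrying the values 0 through 15, with <period> as the radix point, interpreted in the current ibase) can be modeled in Python. This is an illustrative sketch, not part of the standard; since the standard leaves digits >= ibase undefined, the sketch simply lets an invalid character raise an error:

```python
from fractions import Fraction

DIGITS = "0123456789ABCDEF"

def number_value(token, ibase):
    """Interpret a bc NUMBER token in the given input base (2..16).

    Digits A-F always carry the values 10-15, as the lexical
    conventions specify; DIGITS.index() raises ValueError for any
    character outside that set.
    """
    if "." in token:
        int_part, frac_part = token.split(".", 1)
    else:
        int_part, frac_part = token, ""
    value = Fraction(0)
    for ch in int_part:
        value = value * ibase + DIGITS.index(ch)
    # Fractional digits contribute successive negative powers of ibase.
    place = Fraction(1, ibase)
    for ch in frac_part:
        value += DIGITS.index(ch) * place
        place /= ibase
    return value
```

Using Fraction keeps the model exact, which matches bc's decimal (rather than binary floating-point) arithmetic for these conversions.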
Arrays are singly dimensioned and can contain up to {BC_DIM_MAX} elements. Indexing shall begin at zero so an array is indexed from 0 to {BC_DIM_MAX}-1. Subscripts shall be truncated to integers. The application shall ensure that function identifiers are followed by parentheses, possibly enclosing arguments. The three types of identifiers do not conflict. The following table summarizes the rules for precedence and associativity of all operators. Operators on the same line shall have the same precedence; rows are in order of decreasing precedence.

Table: Operators in bc
  Operator                    Associativity
  ++, --                      N/A
  unary -                     N/A
  ^                           Right to left
  *, /, %                     Left to right
  +, binary -                 Left to right
  =, +=, -=, *=, /=, %=, ^=   Right to left
  ==, <=, >=, !=, <, >        None

Each expression or named expression has a scale, which is the number of decimal digits that shall be maintained as the fractional portion of the expression. Named expressions are places where values are stored. Named expressions shall be valid on the left side of an assignment. The value of a named expression shall be the value stored in the place named. Simple identifiers and array elements are named expressions; they have an initial value of zero and an initial scale of zero. The internal registers scale, ibase, and obase are all named expressions. The scale of an expression consisting of the name of one of these registers shall be zero; values assigned to any of these registers are truncated to integers. The scale register shall contain a global value used in computing the scale of expressions (as described below). The value of the register scale is limited to 0 <= scale <= {BC_SCALE_MAX} and shall have a default value of zero. The ibase and obase registers are the input and output number radix, respectively. 
The value of ibase shall be limited to: 2 <= ibase <= 16 The value of obase shall be limited to: 2 <= obase <= {BC_BASE_MAX} When either ibase or obase is assigned a single digit value from the list in Lexical Conventions in bc, the value shall be assumed in hexadecimal. (For example, ibase=A sets to base ten, regardless of the current ibase value.) Otherwise, the behavior is undefined when digits greater than or equal to the value of ibase appear in the input. Both ibase and obase shall have initial values of 10. Internal computations shall be conducted as if in decimal, regardless of the input and output bases, to the specified number of decimal digits. When an exact result is not achieved (for example, scale=0; 3.2/1), the result shall be truncated. For all values of obase specified by this volume of POSIX.1-2017, bc shall output numeric values by performing each of the following steps in order: 1. If the value is less than zero, a <hyphen-minus> ('-') character shall be output. 2. One of the following is output, depending on the numerical value: * If the absolute value of the numerical value is greater than or equal to one, the integer portion of the value shall be output as a series of digits appropriate to obase (as described below). The most significant non-zero digit shall be output first, followed by each successively less significant digit. * If the absolute value of the numerical value is less than one but greater than zero and the scale of the numerical value is greater than zero, it is unspecified whether the character 0 is output. * If the numerical value is zero, the character 0 shall be output. 3. If the scale of the value is greater than zero and the numeric value is not zero, a <period> character shall be output, followed by a series of digits appropriate to obase (as described below) representing the most significant portion of the fractional part of the value. 
If s represents the scale of the value being output, the number of digits output shall be s if obase is 10, less than or equal to s if obase is greater than 10, or greater than or equal to s if obase is less than 10. For obase values other than 10, this should be the number of digits needed to represent a precision of 10^-s. For obase values from 2 to 16, valid digits are the first obase of the single characters: 0 1 2 3 4 5 6 7 8 9 A B C D E F which represent the values zero to 15, inclusive, respectively. For bases greater than 16, each digit shall be written as a separate multi-digit decimal number. Each digit except the most significant fractional digit shall be preceded by a single <space>. For bases from 17 to 100, bc shall write two-digit decimal numbers; for bases from 101 to 1000, three-digit decimal strings, and so on. For example, the decimal number 1024 in base 25 would be written as: 01 15 24 and in base 125, as: 008 024 Very large numbers shall be split across lines with 70 characters per line in the POSIX locale; other locales may split at different character boundaries. Lines that are continued shall end with a <backslash>. A function call shall consist of a function name followed by parentheses containing a <comma>-separated list of expressions, which are the function arguments. A whole array passed as an argument shall be specified by the array name followed by empty square brackets. All function arguments shall be passed by value. As a result, changes made to the formal parameters shall have no effect on the actual arguments. If the function terminates by executing a return statement, the value of the function shall be the value of the expression in the parentheses of the return statement or shall be zero if no expression is provided or if there is no return statement. The result of sqrt(expression) shall be the square root of the expression. The result shall be truncated in the least significant decimal place. 
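The integer-output rules above (single hex-style digits through base 16, fixed-width <space>-separated decimal "digits" beyond that) can be sketched in Python; `bc_digits` is a hypothetical helper name, and line-splitting of very large numbers is omitted:

```python
def bc_digits(n, obase):
    """Render a non-negative integer the way the rules above describe
    for obase: single characters 0-9A-F up to base 16, fixed-width
    decimal 'digits' separated by spaces for larger bases."""
    if n == 0:
        return "0"
    digs = []
    while n:
        n, d = divmod(n, obase)
        digs.append(d)
    digs.reverse()  # most significant digit first
    if obase <= 16:
        return "".join("0123456789ABCDEF"[d] for d in digs)
    # Width grows with the base: 2 digits for bases 17..100,
    # 3 for 101..1000, and so on.
    width = len(str(obase - 1))
    return " ".join(str(d).zfill(width) for d in digs)
```

The worked examples from the text check out: 1024 is 1*625 + 15*25 + 24, hence "01 15 24" in base 25, and 8*125 + 24, hence "008 024" in base 125.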
The scale of the result shall be the scale of the expression or the value of scale, whichever is larger. The result of length(expression) shall be the total number of significant decimal digits in the expression. The scale of the result shall be zero. The result of scale(expression) shall be the scale of the expression. The scale of the result shall be zero. A numeric constant shall be an expression. The scale shall be the number of digits that follow the radix point in the input representing the constant, or zero if no radix point appears. The sequence ( expression ) shall be an expression with the same value and scale as expression. The parentheses can be used to alter the normal precedence. The semantics of the unary and binary operators are as follows: -expression The result shall be the negative of the expression. The scale of the result shall be the scale of expression. The unary increment and decrement operators shall not modify the scale of the named expression upon which they operate. The scale of the result shall be the scale of that named expression. ++named-expression The named expression shall be incremented by one. The result shall be the value of the named expression after incrementing. --named-expression The named expression shall be decremented by one. The result shall be the value of the named expression after decrementing. named-expression++ The named expression shall be incremented by one. The result shall be the value of the named expression before incrementing. named-expression-- The named expression shall be decremented by one. The result shall be the value of the named expression before decrementing. The exponentiation operator, <circumflex> ('^'), shall bind right to left. expression^expression The result shall be the first expression raised to the power of the second expression. If the second expression is not an integer, the behavior is undefined. 
If a is the scale of the left expression and b is the absolute value of the right expression, the scale of the result shall be: if b >= 0 min(a * b, max(scale, a)) if b < 0 scale The multiplicative operators ('*', '/', '%') shall bind left to right. expression*expression The result shall be the product of the two expressions. If a and b are the scales of the two expressions, then the scale of the result shall be: min(a+b,max(scale,a,b)) expression/expression The result shall be the quotient of the two expressions. The scale of the result shall be the value of scale. expression%expression For expressions a and b, a%b shall be evaluated equivalent to the steps: 1. Compute a/b to current scale. 2. Use the result to compute: a - (a / b) * b to scale: max(scale + scale(b), scale(a)) The scale of the result shall be: max(scale + scale(b), scale(a)) When scale is zero, the '%' operator is the mathematical remainder operator. The additive operators ('+', '-') shall bind left to right. expression+expression The result shall be the sum of the two expressions. The scale of the result shall be the maximum of the scales of the expressions. expression-expression The result shall be the difference of the two expressions. The scale of the result shall be the maximum of the scales of the expressions. The assignment operators ('=', "+=", "-=", "*=", "/=", "%=", "^=") shall bind right to left. named-expression=expression This expression shall result in assigning the value of the expression on the right to the named expression on the left. The scale of both the named expression and the result shall be the scale of expression. The compound assignment forms: named-expression <operator>= expression shall be equivalent to: named-expression=named-expression <operator> expression except that the named-expression shall be evaluated only once. 
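The multiplicative scale rules above can be checked with a small Python model that represents a bc value as an exact Fraction plus a scale, truncating toward zero as bc does. This is a sketch of the stated rules only; `trunc`, `bc_mul`, and `bc_div` are hypothetical names, not bc functions:

```python
from fractions import Fraction

def trunc(x, scale):
    """Truncate an exact value toward zero to `scale` decimal digits,
    returning the integer coefficient (i.e. value * 10**scale)."""
    shifted = x * 10**scale
    q = abs(shifted.numerator) // shifted.denominator
    return -q if x < 0 else q

def bc_mul(x, sx, y, sy, scale):
    """Product scale per the rule above: min(sx+sy, max(scale, sx, sy))."""
    rs = min(sx + sy, max(scale, sx, sy))
    return trunc(x * y, rs), rs

def bc_div(x, sx, y, sy, scale):
    """Quotient is carried to the global scale register and truncated."""
    return trunc(x / y, scale), scale
```

With scale = 0, dividing 3.2 by 1 truncates to 3 (the example from the text), while 1.5 * 1.5 keeps one fractional digit (min(2, max(0, 1, 1)) = 1) and yields 2.2.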
Unlike all other operators, the relational operators ('<', '>', "<=", ">=", "==", "!=") shall be only valid as the object of an if, while, or inside a for statement. expression1<expression2 The relation shall be true if the value of expression1 is strictly less than the value of expression2. expression1>expression2 The relation shall be true if the value of expression1 is strictly greater than the value of expression2. expression1<=expression2 The relation shall be true if the value of expression1 is less than or equal to the value of expression2. expression1>=expression2 The relation shall be true if the value of expression1 is greater than or equal to the value of expression2. expression1==expression2 The relation shall be true if the values of expression1 and expression2 are equal. expression1!=expression2 The relation shall be true if the values of expression1 and expression2 are unequal. There are only two storage classes in bc: global and automatic (local). Only identifiers that are local to a function need be declared with the auto command. The arguments to a function shall be local to the function. All other identifiers are assumed to be global and available to all functions. All identifiers, global and local, have initial values of zero. Identifiers declared as auto shall be allocated on entry to the function and released on returning from the function. They therefore do not retain values between function calls. Auto arrays shall be specified by the array name followed by empty square brackets. On entry to a function, the old values of the names that appear as parameters and as automatic variables shall be pushed onto a stack. Until the function returns, reference to these names shall refer only to the new values. References to any of these names from other functions that are called from this function also refer to the new value until one of those functions uses the same name for a local variable. 
When a statement is an expression, unless the main operator is an assignment, execution of the statement shall write the value of the expression followed by a <newline>. When a statement is a string, execution of the statement shall write the value of the string. Statements separated by <semicolon> or <newline> characters shall be executed sequentially. In an interactive invocation of bc, each time a <newline> is read that satisfies the grammatical production: input_item : semicolon_list NEWLINE the sequential list of statements making up the semicolon_list shall be executed immediately and any output produced by that execution shall be written without any delay due to buffering. In an if statement (if(relation) statement), the statement shall be executed if the relation is true. The while statement (while(relation) statement) implements a loop in which the relation is tested; each time the relation is true, the statement shall be executed and the relation retested. When the relation is false, execution shall resume after statement. A for statement(for(expression; relation; expression) statement) shall be the same as: first-expression while (relation) { statement last-expression } The application shall ensure that all three expressions are present. The break statement shall cause termination of a for or while statement. The auto statement (auto identifier [,identifier] ...) shall cause the values of the identifiers to be pushed down. The identifiers can be ordinary identifiers or array identifiers. Array identifiers shall be specified by following the array name by empty square brackets. The application shall ensure that the auto statement is the first statement in a function definition. A define statement: define LETTER ( opt_parameter_list ) { opt_auto_define_list statement_list } defines a function named LETTER. If a function named LETTER was previously defined, the define statement shall replace the previous definition. 
The expression: LETTER ( opt_argument_list ) shall invoke the function named LETTER. The behavior is undefined if the number of arguments in the invocation does not match the number of parameters in the definition. Functions shall be defined before they are invoked. A function shall be considered to be defined within its own body, so recursive calls are valid. The values of numeric constants within a function shall be interpreted in the base specified by the value of the ibase register when the function is invoked. The return statements (return and return(expression)) shall cause termination of a function, popping of its auto variables, and specification of the result of the function. The first form shall be equivalent to return(0). The value and scale of the result returned by the function shall be the value and scale of the expression returned. The quit statement (quit) shall stop execution of a bc program at the point where the statement occurs in the input, even if it occurs in a function definition, or in an if, for, or while statement. The following functions shall be defined when the -l option is specified: s( expression ) Sine of argument in radians. c( expression ) Cosine of argument in radians. a( expression ) Arctangent of argument. l( expression ) Natural logarithm of argument. e( expression ) Exponential function of argument. j( expression1, expression2 ) Bessel function of expression2 of the first kind of integer order expression1. The scale of the result returned by these functions shall be the value of the scale register at the time the function is invoked. The value of the scale register after these functions have completed their execution shall be the same value it had upon invocation. The behavior is undefined if any of these functions is invoked with an argument outside the domain of the mathematical function. EXIT STATUS top The following exit values shall be returned: 0 All input files were processed successfully. 
unspecified An error occurred. CONSEQUENCES OF ERRORS top If any file operand is specified and the named file cannot be accessed, bc shall write a diagnostic message to standard error and terminate without any further action. In an interactive invocation of bc, the utility should print an error message and recover following any error in the input. In a non-interactive invocation of bc, invalid input causes undefined behavior. The following sections are informative. APPLICATION USAGE top Automatic variables in bc do not work in exactly the same way as in either C or PL/1. For historical reasons, the exit status from bc cannot be relied upon to indicate that an error has occurred. Returning zero after an error is possible. Therefore, bc should be used primarily by interactive users (who can react to error messages) or by application programs that can somehow validate the answers returned as not including error messages. The bc utility always uses the <period> ('.') character to represent a radix point, regardless of any decimal-point character specified as part of the current locale. In languages like C or awk, the <period> character is used in program source, so it can be portable and unambiguous, while the locale-specific character is used in input and output. Because there is no distinction between source and input in bc, this arrangement would not be possible. Using the locale-specific character in bc's input would introduce ambiguities into the language; consider the following example in a locale with a <comma> as the decimal-point character: define f(a,b) { ... } ... f(1,2,3) Because of such ambiguities, the <period> character is used in input. Having input follow different conventions from output would be confusing in either pipeline usage or interactive usage, so the <period> is also used in output. 
EXAMPLES top In the shell, the following assigns an approximation of the first ten digits of pi to the variable x: x=$(printf "%s\n" 'scale = 10; 104348/33215' | bc) The following bc program prints the same approximation of pi, with a label, to standard output: scale = 10 "pi equals " 104348 / 33215 The following defines a function to compute an approximate value of the exponential function (note that such a function is predefined if the -l option is specified): scale = 20 define e(x){ auto a, b, c, i, s a = 1 b = 1 s = 1 for (i = 1; 1 == 1; i++){ a = a*x b = b*i c = a/b if (c == 0) { return(s) } s = s+c } } The following prints approximate values of the exponential function of the first ten integers: for (i = 1; i <= 10; ++i) { e(i) } RATIONALE top The bc utility is implemented historically as a front-end processor for dc; dc was not selected to be part of this volume of POSIX.1-2017 because bc was thought to have a more intuitive programmatic interface. Current implementations that implement bc using dc are expected to be compliant. The exit status for error conditions has been left unspecified for several reasons: * The bc utility is used in both interactive and non-interactive situations. Different exit codes may be appropriate for the two uses. * It is unclear when a non-zero exit should be given; divide-by-zero, undefined functions, and syntax errors are all possibilities. * It is not clear what utility the exit status has. * In the 4.3 BSD, System V, and Ninth Edition implementations, bc works in conjunction with dc. The dc utility is the parent, bc is the child. This was done to cleanly terminate bc if dc aborted. The decision to have bc exit upon encountering an inaccessible input file is based on the belief that bc file1 file2 is used most often when at least file1 contains data/function declarations/initializations. Having bc continue with prerequisite files missing is probably not useful. 
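For comparison, the bc e(x) function in the example above can be mirrored in Python, with the decimal module standing in for bc's truncated fixed-point arithmetic. This is an illustrative translation under that assumption, not part of the standard:

```python
from decimal import Decimal, getcontext, ROUND_DOWN

def e(x, scale=20):
    """Taylor series for exp(x), mirroring the bc e(x) example:
    accumulate terms a/b = x**i / i! until a term truncates to 0
    at the current scale."""
    q = Decimal(1).scaleb(-scale)       # quantum: 10**-scale
    getcontext().prec = scale + 25      # headroom for intermediates
    a = b = s = Decimal(1)
    i = 1
    while True:
        a *= Decimal(x)                 # a = x**i
        b *= i                          # b = i!
        c = (a / b).quantize(q, rounding=ROUND_DOWN)  # bc truncates
        if c == 0:
            return s.quantize(q, rounding=ROUND_DOWN)
        s += c
        i += 1
```

Because every term is truncated toward zero, the sum slightly undershoots the true value, exactly as the bc version does.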
There is no implication in the CONSEQUENCES OF ERRORS section that bc must check all its files for accessibility before opening any of them. There was considerable debate on the appropriateness of the language accepted by bc. Several reviewers preferred to see either a pure subset of the C language or some changes to make the language more compatible with C. While the bc language has some obvious similarities to C, it has never claimed to be compatible with any version of C. An interpreter for a subset of C might be a very worthwhile utility, and it could potentially make bc obsolete. However, no such utility is known in historical practice, and it was not within the scope of this volume of POSIX.1-2017 to define such a language and utility. If and when they are defined, it may be appropriate to include them in a future version of this standard. This left the following alternatives: 1. Exclude any calculator language from this volume of POSIX.1-2017. The consensus of the standard developers was that a simple programmatic calculator language is very useful for both applications and interactive users. The only arguments for excluding any calculator were that it would become obsolete if and when a C-compatible one emerged, or that the absence would encourage the development of such a C-compatible one. These arguments did not sufficiently address the needs of current application developers. 2. Standardize the historical dc, possibly with minor modifications. The consensus of the standard developers was that dc is a fundamentally less usable language and that this would be far too severe a penalty for avoiding the issue of being similar to but incompatible with C. 3. Standardize the historical bc, possibly with minor modifications. This was the approach taken. Most of the proponents of changing the language would not have been satisfied until most or all of the incompatibilities with C were resolved. 
Since most of the changes considered most desirable would break historical applications and require significant modification to historical implementations, almost no modifications were made. The one significant modification that was made was the replacement of the historical bc assignment operators "=+", and so on, with the more modern "+=", and so on. The older versions are considered to be fundamentally flawed because of the lexical ambiguity in uses like a=-1. In order to permit implementations to deal with backwards-compatibility as they see fit, the behavior of this one ambiguous construct was made undefined. (At least three implementations have been known to support this change already, so the degree of change involved should not be great.) The '%' operator is the mathematical remainder operator when scale is zero. The behavior of this operator for other values of scale is from historical implementations of bc, and has been maintained for the sake of historical applications despite its non-intuitive nature. Historical implementations permit setting ibase and obase to a broader range of values. This includes values less than 2, which were not seen as sufficiently useful to standardize. These implementations do not interpret input properly for values of ibase that are greater than 16. This is because numeric constants are recognized syntactically, rather than lexically, as described in this volume of POSIX.1-2017. They are built from lexical tokens of single hexadecimal digits and <period> characters. Since <blank> characters between tokens are not visible at the syntactic level, it is not possible to recognize the multi-digit ``digits'' used in the higher bases properly. The ability to recognize input in these bases was not considered useful enough to require modifying these implementations. 
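The base-handling rules discussed above are easy to try from the shell. A short sketch, assuming a bc on PATH that follows these rules:

```shell
# obase controls the output base: print 255 in hexadecimal.
echo 'obase=16; 255' | bc
# ibase changes how later input is read. After ibase=8, constants are
# read as octal; ibase=A (the single digit A, value ten) restores
# decimal, which is why the idiom from the text above uses "A", not "10".
printf 'ibase=8\n20\nibase=A\n20\n' | bc
```

The first command prints FF; the second prints 16 (octal 20) and then 20 (decimal again). Writing `ibase=10` after switching to octal would be a no-op, since "10" would itself be read in base 8.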
Note that the recognition of numeric constants at the syntactic level is not a problem with conformance to this volume of POSIX.1-2017, as it does not impact the behavior of conforming applications (and correct bc programs). Historical implementations also accept input with all of the digits '0'-'9' and 'A'-'F' regardless of the value of ibase; since digits with value greater than or equal to ibase are not really appropriate, the behavior when they appear is undefined, except for the common case of: ibase=8; /* Process in octal base. */ ... ibase=A /* Restore decimal base. */ In some historical implementations, if the expression to be written is an uninitialized array element, a leading <space> and/or up to four leading 0 characters may be output before the character zero. This behavior is considered a bug; it is unlikely that any currently conforming application relies on: echo 'b[3]' | bc returning 00000 rather than 0. Exact calculation of the number of fractional digits to output for a given value in a base other than 10 can be computationally expensive. Historical implementations use a faster approximation, and this is permitted. Note that the requirements apply only to values of obase that this volume of POSIX.1-2017 requires implementations to support (in particular, not to 1, 0, or negative bases, if an implementation supports them as an extension). Historical implementations of bc did not allow array parameters to be passed as the last parameter to a function. New implementations are encouraged to remove this restriction even though it is not required by the grammar. FUTURE DIRECTIONS top None. 
SEE ALSO top Section 1.3, Grammar Conventions, awk(1p) The Base Definitions volume of POSIX.1-2017, Chapter 8, Environment Variables, Section 12.2, Utility Syntax Guidelines COPYRIGHT top Portions of this text are reprinted and reproduced in electronic form from IEEE Std 1003.1-2017, Standard for Information Technology -- Portable Operating System Interface (POSIX), The Open Group Base Specifications Issue 7, 2018 Edition, Copyright (C) 2018 by the Institute of Electrical and Electronics Engineers, Inc and The Open Group. In the event of any discrepancy between this version and the original IEEE and The Open Group Standard, the original IEEE and The Open Group Standard is the referee document. The original Standard can be obtained online at http://www.opengroup.org/unix/online.html . Any typographical or formatting errors that appear in this page are most likely to have been introduced during the conversion of the source files to man page format. To report such errors, see https://www.kernel.org/doc/man-pages/reporting_bugs.html . IEEE/The Open Group 2017 BC(1P) Pages that refer to this page: printf(1p) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH. | # bc\n\n> An arbitrary precision calculator language.\n> See also: `dc`.\n> More information: <https://manned.org/man/bc.1>.\n\n- Start an interactive session:\n\n`bc`\n\n- Start an interactive session with the standard math library enabled:\n\n`bc --mathlib`\n\n- Calculate an expression:\n\n`echo '{{5 / 3}}' | bc`\n\n- Execute a script:\n\n`bc {{path/to/script.bc}}`\n\n- Calculate an expression with the specified scale:\n\n`echo 'scale = {{10}}; {{5 / 3}}' | bc`\n\n- Calculate a sine/cosine/arctangent/natural logarithm/exponential function using `mathlib`:\n\n`echo '{{s|c|a|l|e}}({{1}})' | bc --mathlib`\n |
bg | bg(1p) - Linux manual page bg(1p) Linux manual page PROLOG | NAME | SYNOPSIS | DESCRIPTION | OPTIONS | OPERANDS | STDIN | INPUT FILES | ENVIRONMENT VARIABLES | ASYNCHRONOUS EVENTS | STDOUT | STDERR | OUTPUT FILES | EXTENDED DESCRIPTION | EXIT STATUS | CONSEQUENCES OF ERRORS | APPLICATION USAGE | EXAMPLES | RATIONALE | FUTURE DIRECTIONS | SEE ALSO | COPYRIGHT BG(1P) POSIX Programmer's Manual BG(1P) PROLOG top This manual page is part of the POSIX Programmer's Manual. The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux. NAME top bg - run jobs in the background SYNOPSIS top bg [job_id...] DESCRIPTION top If job control is enabled (see the description of set -m), the bg utility shall resume suspended jobs from the current environment (see Section 2.12, Shell Execution Environment) by running them as background jobs. If the job specified by job_id is already a running background job, the bg utility shall have no effect and shall exit successfully. Using bg to place a job into the background shall cause its process ID to become ``known in the current shell execution environment'', as if it had been started as an asynchronous list; see Section 2.9.3.1, Examples. OPTIONS top None. OPERANDS top The following operand shall be supported: job_id Specify the job to be resumed as a background job. If no job_id operand is given, the most recently suspended job shall be used. The format of job_id is described in the Base Definitions volume of POSIX.1-2017, Section 3.204, Job Control Job ID. STDIN top Not used. INPUT FILES top None. ENVIRONMENT VARIABLES top The following environment variables shall affect the execution of bg: LANG Provide a default value for the internationalization variables that are unset or null. 
(See the Base Definitions volume of POSIX.1-2017, Section 8.2, Internationalization Variables for the precedence of internationalization variables used to determine the values of locale categories.) LC_ALL If set to a non-empty string value, override the values of all the other internationalization variables. LC_CTYPE Determine the locale for the interpretation of sequences of bytes of text data as characters (for example, single-byte as opposed to multi-byte characters in arguments). LC_MESSAGES Determine the locale that should be used to affect the format and contents of diagnostic messages written to standard error. NLSPATH Determine the location of message catalogs for the processing of LC_MESSAGES. ASYNCHRONOUS EVENTS top Default. STDOUT top The output of bg shall consist of a line in the format: "[%d] %s\n", <job-number>, <command> where the fields are as follows: <job-number> A number that can be used to identify the job to the wait, fg, and kill utilities. Using these utilities, the job can be identified by prefixing the job number with '%'. <command> The associated command that was given to the shell. STDERR top The standard error shall be used only for diagnostic messages. OUTPUT FILES top None. EXTENDED DESCRIPTION top None. EXIT STATUS top The following exit values shall be returned: 0 Successful completion. >0 An error occurred. CONSEQUENCES OF ERRORS top If job control is disabled, the bg utility shall exit with an error and no job shall be placed in the background. The following sections are informative. APPLICATION USAGE top A job is generally suspended by typing the SUSP character (<control>-Z on most systems); see the Base Definitions volume of POSIX.1-2017, Chapter 11, General Terminal Interface. At that point, bg can put the job into the background. This is most effective when the job is expecting no terminal input and its output has been redirected to non-terminal files. 
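The suspend-then-resume cycle described above can also be reproduced in a non-interactive bash script by enabling job control with set -m; a sketch (sleep stands in for any long-running job, and SIGSTOP stands in for typing the terminal's SUSP character):

```shell
#!/bin/bash
set -m               # job control is normally enabled only in interactive shells
sleep 1 &            # start a job
kill -s STOP %1      # suspend it (stands in for typing the SUSP character)
bg %1                # resume it in the background; bg prints its "[1] ..." line
wait %1              # block until the job completes
echo "job finished"
```

Note that without set -m, %1 job IDs and bg would not be usable here, which is exactly the "own utility execution environment" limitation the page describes.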
A background job can be forced to stop when it has terminal output by issuing the command: stty tostop A background job can be stopped with the command: kill -s stop job ID The bg utility does not work as expected when it is operating in its own utility execution environment because that environment has no suspended jobs. In the following examples: ... | xargs bg (bg) each bg operates in a different environment and does not share its parent shell's understanding of jobs. For this reason, bg is generally implemented as a shell regular built-in. EXAMPLES top None. RATIONALE top The extensions to the shell specified in this volume of POSIX.1-2017 have mostly been based on features provided by the KornShell. The job control features provided by bg, fg, and jobs are also based on the KornShell. The standard developers examined the characteristics of the C shell versions of these utilities and found that differences exist. Despite widespread use of the C shell, the KornShell versions were selected for this volume of POSIX.1-2017 to maintain a degree of uniformity with the rest of the KornShell features selected (such as the very popular command line editing features). The bg utility is expected to wrap its output if the output exceeds the number of display columns. FUTURE DIRECTIONS top None. SEE ALSO top Section 2.9.3.1, Examples, fg(1p), kill(1p), jobs(1p), wait(1p) The Base Definitions volume of POSIX.1-2017, Section 3.204, Job Control Job ID, Chapter 8, Environment Variables, Chapter 11, General Terminal Interface COPYRIGHT top Portions of this text are reprinted and reproduced in electronic form from IEEE Std 1003.1-2017, Standard for Information Technology -- Portable Operating System Interface (POSIX), The Open Group Base Specifications Issue 7, 2018 Edition, Copyright (C) 2018 by the Institute of Electrical and Electronics Engineers, Inc and The Open Group. 
In the event of any discrepancy between this version and the original IEEE and The Open Group Standard, the original IEEE and The Open Group Standard is the referee document. The original Standard can be obtained online at http://www.opengroup.org/unix/online.html . Any typographical or formatting errors that appear in this page are most likely to have been introduced during the conversion of the source files to man page format. To report such errors, see https://www.kernel.org/doc/man-pages/reporting_bugs.html . IEEE/The Open Group 2017 BG(1P) Pages that refer to this page: fg(1p), jobs(1p) | # bg\n\n> Resume suspended jobs (e.g. using `Ctrl + Z`), and keep them running in the background.\n> More information: <https://manned.org/bg>.\n\n- Resume the most recently suspended job and run it in the background:\n\n`bg`\n\n- Resume a specific job (use `jobs -l` to get its ID) and run it in the background:\n\n`bg %{{job_id}}`\n |
bind | bind(1): bash built-in commands, see bash - Linux man page bind(1) - Linux man page Name bash, :, ., [, alias, bg, bind, break, builtin, caller, cd, command, compgen, complete, compopt, continue, declare, dirs, disown, echo, enable, eval, exec, exit, export, false, fc, fg, getopts, hash, help, history, jobs, kill, let, local, logout, mapfile, popd, printf, pushd, pwd, read, readonly, return, set, shift, shopt, source, suspend, test, times, trap, true, type, typeset, ulimit, umask, unalias, unset, wait - bash built-in commands, see bash(1) Bash Builtin Commands See Also bash(1), sh(1) |
# bind
> Manipulate key bindings and readline behavior in Bash.
> More information: <https://www.gnu.org/software/bash/manual/bash.html#Bindable-Readline-Commands>.
- Display all current key bindings:
`bind -p`
- Set a key binding (e.g., set `Ctrl + k` to execute `kill-line`):
`bind '"\C-k":kill-line'`
- Set a key binding to run shell commands rather than a readline function (e.g., `Ctrl + l` to clear the screen):
`bind -x '"\C-l": clear'`
- Remove a key binding (e.g., remove the binding for `Ctrl + x`):
`bind -r '\C-x'`
- Display key sequences bound to macros and the strings they output:
`bind -S`
- Execute commands from a file as if they were typed at the keyboard:
`bind -f {{path/to/file}}`
|
blkdiscard | blkdiscard(8) - Linux manual page blkdiscard(8) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | EXIT STATUS | AUTHORS | SEE ALSO | REPORTING BUGS | AVAILABILITY BLKDISCARD(8) System Administration BLKDISCARD(8) NAME top blkdiscard - discard sectors on a device SYNOPSIS top blkdiscard [options] [-o offset] [-l length] device DESCRIPTION top blkdiscard is used to discard device sectors. This is useful for solid-state drives (SSDs) and thinly-provisioned storage. Unlike fstrim(8), this command is used directly on the block device. By default, blkdiscard will discard all blocks on the device. Options may be used to modify this behavior based on range or size, as explained below. The device argument is the pathname of the block device. WARNING: All data in the discarded region on the device will be lost! OPTIONS top The offset and length arguments may be followed by the multiplicative suffixes KiB (=1024), MiB (=1024*1024), and so on for GiB, TiB, PiB, EiB, ZiB and YiB (the "iB" is optional, e.g., "K" has the same meaning as "KiB") or the suffixes KB (=1000), MB (=1000*1000), and so on for GB, TB, PB, EB, ZB and YB. -f, --force Disable all checking. Since v2.36 the block device is opened in exclusive mode (O_EXCL) by default to avoid collision with a mounted filesystem or another kernel subsystem. The --force option disables the exclusive access mode. -o, --offset offset Byte offset into the device from which to start discarding. The provided value must be aligned to the device sector size. The default value is zero. -l, --length length The number of bytes to discard (counting from the starting point). The provided value must be aligned to the device sector size. If the specified value extends past the end of the device, blkdiscard will stop at the device size boundary. The default value extends to the end of the device. 
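The alignment requirement on --offset and --length can be checked with plain shell arithmetic before touching the device. A sketch (the 512-byte sector size is an assumption for illustration; a real script would query the device, e.g. with `blockdev --getss`, which needs root):

```shell
# Check that a byte offset is aligned to the device sector size
# before passing it to blkdiscard. 512 is an assumed sector size.
offset=$((100 * 1024 * 1024))   # 100 MiB expressed in bytes
sector=512
if [ $((offset % sector)) -eq 0 ]; then
    echo "offset $offset is aligned"
else
    echo "offset $offset is NOT aligned" >&2
fi
```

The same check applies to the length value; blkdiscard rejects unaligned values rather than rounding them.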
-p, --step length The number of bytes to discard within one iteration. The default is to discard all by one ioctl call. -q, --quiet Suppress warning messages. -s, --secure Perform a secure discard. A secure discard is the same as a regular discard except that all copies of the discarded blocks that were possibly created by garbage collection must also be erased. This requires support from the device. -z, --zeroout Zero-fill rather than discard. -v, --verbose Display the aligned values of offset and length. If the --step option is specified, it prints the discard progress every second. -h, --help Display help text and exit. -V, --version Print version and exit. EXIT STATUS top blkdiscard has the following exit status values: 0 success 1 failure; incorrect invocation, permissions or any other generic error 2 failure; since v2.39, the device does not support discard operation AUTHORS top Lukas Czerner <lczerner@redhat.com>, Karel Zak <kzak@redhat.com> SEE ALSO top fstrim(8) REPORTING BUGS top For bug reports, use the issue tracker at https://github.com/util-linux/util-linux/issues. AVAILABILITY top The blkdiscard command is part of the util-linux package which can be downloaded from Linux Kernel Archive <https://www.kernel.org/pub/linux/utils/util-linux/>. This page is part of the util-linux (a random collection of Linux utilities) project. Information about the project can be found at https://www.kernel.org/pub/linux/utils/util-linux/. If you have a bug report for this manual page, send it to util-linux@vger.kernel.org. This page was obtained from the project's upstream Git repository git://git.kernel.org/pub/scm/utils/util-linux/util-linux.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-14.) 
If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org util-linux 2.39.594-1e0ad 2023-07-19 BLKDISCARD(8) Pages that refer to this page: fstrim(8) | # blkdiscard\n\n> Discards device sectors on storage devices. Useful for SSDs.\n> More information: <https://manned.org/blkdiscard>.\n\n- Discard all sectors on a device, removing all data:\n\n`blkdiscard /dev/{{device}}`\n\n- Securely discard all blocks on a device, removing all data:\n\n`blkdiscard --secure /dev/{{device}}`\n\n- Discard the first 100 MB of a device:\n\n`blkdiscard --length {{100MB}} /dev/{{device}}`\n |
blkid | blkid(8) - Linux manual page blkid(8) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | EXIT STATUS | CONFIGURATION FILE | ENVIRONMENT | AUTHORS | SEE ALSO | REPORTING BUGS | AVAILABILITY BLKID(8) System Administration BLKID(8) NAME top blkid - locate/print block device attributes SYNOPSIS top blkid --label label | --uuid uuid blkid [--no-encoding --garbage-collect --list-one --cache-file file] [--output format] [--match-tag tag] [--match-token NAME=value] [device...] blkid --probe [--offset offset] [--output format] [--size size] [--match-tag tag] [--match-types list] [--usages list] [--no-part-details] device... blkid --info [--output format] [--match-tag tag] device... DESCRIPTION top The blkid program is the command-line interface to working with the libblkid(3) library. It can determine the type of content (e.g., filesystem or swap) that a block device holds, and also the attributes (tokens, NAME=value pairs) from the content metadata (e.g., LABEL or UUID fields). It is recommended to use the lsblk(8) command to get information about block devices, or lsblk --fs to get an overview of filesystems, or findmnt(8) to search in already mounted filesystems. lsblk(8) provides more information and better control of output formatting, is easy to use in scripts, and does not require root permissions to get actual information. blkid reads information directly from devices and for non-root users it returns cached unverified information. blkid is mostly designed for system services and to test libblkid(3) functionality. When device is specified, tokens from only this device are displayed. It is possible to specify multiple device arguments on the command line. If none is given, all partitions or unpartitioned devices which appear in /proc/partitions are shown, if they are recognized. 
blkid has two main forms of operation: either searching for a device with a specific NAME=value pair, or displaying NAME=value pairs for one or more specified devices. For security reasons blkid silently ignores all devices where the probing result is ambivalent (multiple colliding filesystems are detected). The low-level probing mode (-p) provides more information and an extra exit status in this case. It's recommended to use wipefs(8) to get a detailed overview and to erase obsolete stuff (magic strings) from the device. OPTIONS top The size and offset arguments may be followed by the multiplicative suffixes like KiB (=1024), MiB (=1024*1024), and so on for GiB, TiB, PiB, EiB, ZiB and YiB (the "iB" is optional, e.g., "K" has the same meaning as "KiB"), or the suffixes KB (=1000), MB (=1000*1000), and so on for GB, TB, PB, EB, ZB and YB. -c, --cache-file cachefile Read from cachefile instead of reading from the default cache file (see the CONFIGURATION FILE section for more details). If you want to start with a clean cache (i.e., don't report devices previously scanned but not necessarily available at this time), specify /dev/null. -d, --no-encoding Don't encode non-printing characters. The non-printing characters are encoded by ^ and M- notation by default. Note that the --output udev output format uses a different encoding which cannot be disabled. -D, --no-part-details Don't print information (PART_ENTRY_* tags) from partition table in low-level probing mode. -g, --garbage-collect Perform a garbage collection pass on the blkid cache to remove devices which no longer exist. -H, --hint setting Set probing hint. The hints are an optional way to force probing functions to check, for example, another location. The currently supported is "session_offset=number" to set session offset on multi-session UDF. -i, --info Display information about I/O Limits (aka I/O topology). The 'export' output format is automatically enabled. 
This option can be used together with the --probe option. -k, --list-filesystems List all known filesystems and RAIDs and exit. -l, --list-one Look up only one device that matches the search parameter specified with the --match-token option. If there are multiple devices that match the specified search parameter, then the device with the highest priority is returned, and/or the first device found at a given priority (but see below note about udev). Device types in order of decreasing priority are: Device Mapper, EVMS, LVM, MD, and finally regular block devices. If this option is not specified, blkid will print all of the devices that match the search parameter. This option forces blkid to use udev when used for LABEL or UUID tokens in --match-token. The goal is to provide output consistent with other utils (like mount(8), etc.) on systems where the same tag is used for multiple devices. -L, --label label Look up the device that uses this filesystem label; this is equal to --list-one --output device --match-token LABEL=label. This lookup method is able to reliably use /dev/disk/by-label udev symlinks (dependent on a setting in /etc/blkid.conf). Avoid using the symlinks directly; it is not reliable to use the symlinks without verification. The --label option works on systems with and without udev. Unfortunately, the original blkid(8) from e2fsprogs uses the -L option as a synonym for -o list. For better portability, use -l -o device -t LABEL=label and -o list in your scripts rather than the -L option. -n, --match-types list Restrict the probing functions to the specified (comma-separated) list of superblock types (names). The list items may be prefixed with "no" to specify the types which should be ignored. For example: blkid --probe --match-types vfat,ext3,ext4 /dev/sda1 probes for vfat, ext3 and ext4 filesystems, and blkid --probe --match-types nominix /dev/sda1 probes for all supported formats except minix filesystems. 
This option is only useful together with --probe. -o, --output format Use the specified output format. Note that the order of variables and devices is not fixed. See also option -s. The format parameter may be: full print all tags (the default) value print the value of the tags list print the devices in a user-friendly format; this output format is unsupported for low-level probing (--probe or --info). This output format is DEPRECATED in favour of the lsblk(8) command. device print the device name only; this output format is always enabled for the --label and --uuid options udev print key="value" pairs for easy import into the udev environment; the keys are prefixed by ID_FS_ or ID_PART_ prefixes. The value may be modified to be safe for udev environment; allowed is plain ASCII, hex-escaping and valid UTF-8, everything else (including whitespaces) is replaced with '_'. The keys with _ENC postfix use hex-escaping for unsafe chars. The udev output returns the ID_FS_AMBIVALENT tag if more superblocks are detected, and ID_PART_ENTRY_* tags are always returned for all partitions including empty partitions. This output format is DEPRECATED. export print key=value pairs for easy import into the environment; this output format is automatically enabled when I/O Limits (--info option) are requested. The non-printing characters are encoded by ^ and M- notation and all potentially unsafe characters are escaped. -O, --offset offset Probe at the given offset (only useful with --probe). This option can be used together with the --info option. -p, --probe Switch to low-level superblock probing mode (bypassing the cache). Note that low-level probing also returns information about partition table type (PTTYPE tag) and partitions (PART_ENTRY_* tags). The tag names produced by low-level probing are based on names used internally by libblkid and it may be different than when executed without --probe (for example PART_ENTRY_UUID= vs PARTUUID=). See also --no-part-details. 
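The key=value "export" format described above is convenient to consume from the shell. A sketch that parses it (the sample lines are illustrative stand-ins for real `blkid -o export` output, since probing a real device needs root):

```shell
# Parse `blkid -o export`-style key=value output in plain sh.
# Real output would come from something like:
#   sudo blkid -o export /dev/sda1
sample='DEVNAME=/dev/sda1
UUID=0a3407de-014b-458b-b5c1-848e92a327a3
TYPE=ext4'
printf '%s\n' "$sample" | while IFS='=' read -r key value; do
    case $key in
        UUID) echo "uuid: $value" ;;
        TYPE) echo "filesystem type: $value" ;;
    esac
done
```

Splitting on the first '=' with IFS keeps any '=' inside the value intact, which matters for the hex-escaped *_ENC keys.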
-s, --match-tag tag For each (specified) device, show only the tags that match tag. It is possible to specify multiple --match-tag options. If no tag is specified, then all tokens are shown for all (specified) devices. In order to just refresh the cache without showing any tokens, use --match-tag none with no other options. -S, --size size Override the size of device/file (only useful with --probe). -t, --match-token NAME=value Search for block devices with tokens named NAME that have the value value, and display any devices which are found. Common values for NAME include TYPE, LABEL, and UUID. If there are no devices specified on the command line, all block devices will be searched; otherwise only the specified devices are searched. -u, --usages list Restrict the probing functions to the specified (comma-separated) list of "usage" types. Supported usage types are: filesystem, raid, crypto and other. The list items may be prefixed with "no" to specify the usage types which should be ignored. For example: blkid --probe --usages filesystem,other /dev/sda1 probes for all filesystem and other (e.g., swap) formats, and blkid --probe --usages noraid /dev/sda1 probes for all supported formats except RAIDs. This option is only useful together with --probe. -U, --uuid uuid Look up the device that uses this filesystem uuid. For more details see the --label option. -h, --help Display help text and exit. -V, --version Print version and exit. EXIT STATUS top If the specified device or the device addressed by the specified token (option --match-token) was found and it's possible to gather any information about the device, an exit status 0 is returned. Note the option --match-tag filters output tags, but it does not affect exit status. If the specified token was not found, or no (specified) devices could be identified, or it is impossible to gather any information about the device identifiers or device content, an exit status of 2 is returned. 
For usage or other errors, an exit status of 4 is returned. If an ambivalent probing result was detected by low-level probing mode (-p), an exit status of 8 is returned. CONFIGURATION FILE top The standard location of the /etc/blkid.conf config file can be overridden by the environment variable BLKID_CONF. The following options control the libblkid library: SEND_UEVENT=<yes|not> Sends uevent when /dev/disk/by-{label,uuid,partuuid,partlabel}/ symlink does not match with LABEL, UUID, PARTUUID or PARTLABEL on the device. Default is "yes". CACHE_FILE=<path> Overrides the standard location of the cache file. This setting can be overridden by the environment variable BLKID_FILE. Default is /run/blkid/blkid.tab, or /etc/blkid.tab on systems without a /run directory. EVALUATE=<methods> Defines LABEL and UUID evaluation method(s). Currently, the libblkid library supports the "udev" and "scan" methods. More than one method may be specified in a comma-separated list. Default is "udev,scan". The "udev" method uses udev /dev/disk/by-* symlinks and the "scan" method scans all block devices from the /proc/partitions file. ENVIRONMENT top Setting LIBBLKID_DEBUG=all enables debug output. AUTHORS top blkid was written by Andreas Dilger for libblkid and improved by Theodore Ts'o and Karel Zak. SEE ALSO top libblkid(3), findfs(8), lsblk(8), wipefs(8) REPORTING BUGS top For bug reports, use the issue tracker at https://github.com/util-linux/util-linux/issues. AVAILABILITY top The blkid command is part of the util-linux package which can be downloaded from Linux Kernel Archive <https://www.kernel.org/pub/linux/utils/util-linux/>. This page is part of the util-linux (a random collection of Linux utilities) project. Information about the project can be found at https://www.kernel.org/pub/linux/utils/util-linux/. If you have a bug report for this manual page, send it to util-linux@vger.kernel.org. 
This page was obtained from the project's upstream Git repository git://git.kernel.org/pub/scm/utils/util-linux/util-linux.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-14.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org util-linux 2.39.594-1e0ad 2023-07-19 BLKID(8) Pages that refer to this page: ioctl_fslabel(2), open_by_handle_at(2), libblkid(3), fstab(5), blkid(8), btrfs-device(8), findfs(8), lsblk(8), wipefs(8) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH. | # blkid\n\n> Lists all recognized partitions and their Universally Unique Identifier (UUID).\n> More information: <https://manned.org/blkid>.\n\n- List all partitions:\n\n`sudo blkid`\n\n- List all partitions in a table, including current mountpoints:\n\n`sudo blkid -o list`\n |
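The blkid exit statuses documented above lend themselves to scripting. A minimal sketch, assuming only the status values from the manual; the wrapper function `explain_blkid_status` is ours, not part of blkid:

```shell
#!/bin/sh
# Map blkid's documented exit statuses to human-readable messages.
# Only the numeric values and their meanings come from the man page;
# the function itself is an illustrative helper.
explain_blkid_status() {
    case "$1" in
        0) echo "device found and information gathered" ;;
        2) echo "token/device not found, or no information could be gathered" ;;
        4) echo "usage or other error" ;;
        8) echo "ambivalent low-level probing result (-p)" ;;
        *) echo "undocumented status $1" ;;
    esac
}

# Typical use against a real device would be:
#   blkid --match-token LABEL=backup; explain_blkid_status "$?"
explain_blkid_status 2
```

Note that `--match-tag` only filters which tags are printed, so a wrapper like this keys off the exit status alone, as the manual advises.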
btrfs | btrfs(8) - Linux manual page NAME | SYNOPSIS | DESCRIPTION | COMMAND SYNTAX | COMMANDS | STANDALONE TOOLS | EXIT STATUS | AVAILABILITY | SEE ALSO | COLOPHON BTRFS(8) Btrfs Manual BTRFS(8) NAME top btrfs - a toolbox to manage btrfs filesystems SYNOPSIS top btrfs <command> [<args>] DESCRIPTION top The btrfs utility is a toolbox for managing btrfs filesystems. There are command groups to work with subvolumes, devices, the whole filesystem, or other specific actions. See section COMMANDS. There are also standalone tools for some tasks like btrfs-convert or btrfstune that were separate historically and/or haven't been merged to the main utility. See section STANDALONE TOOLS for more details. For other topics (mount options, etc.) please refer to the separate manual page btrfs(5). COMMAND SYNTAX top Any command name can be shortened so long as the shortened form is unambiguous; however, it is recommended to use full command names in scripts. All command groups have their manual page named btrfs-<group>. For example: it is possible to run btrfs sub snaps instead of btrfs subvolume snapshot. But btrfs file s is not allowed, because file s may be interpreted both as filesystem show and as filesystem sync. If the command name is ambiguous, the list of conflicting options is printed. For an overview of a given command use btrfs command --help or btrfs [command...] --help --full to print all available options. COMMANDS top balance Balance btrfs filesystem chunks across single or several devices. See btrfs-balance(8) for details. check Do off-line check on a btrfs filesystem. See btrfs-check(8) for details. device Manage devices managed by btrfs, including add/delete/scan and so on. See btrfs-device(8) for details. filesystem Manage a btrfs filesystem, including label setting/sync and so on. See btrfs-filesystem(8) for details. inspect-internal Debug tools for developers/hackers. 
See btrfs-inspect-internal(8) for details. property Get/set a property from/to a btrfs object. See btrfs-property(8) for details. qgroup Manage quota groups (qgroups) for a btrfs filesystem. See btrfs-qgroup(8) for details. quota Manage quotas on a btrfs filesystem, such as enabling and rescanning. See btrfs-quota(8) and btrfs-qgroup(8) for details. receive Receive subvolume data from stdin/file, e.g. for restore. See btrfs-receive(8) for details. replace Replace btrfs devices. See btrfs-replace(8) for details. rescue Try to rescue a damaged btrfs filesystem. See btrfs-rescue(8) for details. restore Try to restore files from a damaged btrfs filesystem. See btrfs-restore(8) for details. scrub Scrub a btrfs filesystem. See btrfs-scrub(8) for details. send Send subvolume data to stdout/file, e.g. for backup. See btrfs-send(8) for details. subvolume Create/delete/list/manage btrfs subvolumes. See btrfs-subvolume(8) for details. STANDALONE TOOLS top New functionality could be provided using a standalone tool. If the functionality proves to be useful, then the standalone tool is declared obsolete and its functionality is copied to the main tool. Obsolete tools are removed after a long (years) deprecation period. Tools that are still in active use without an equivalent in btrfs: btrfs-convert in-place conversion from ext2/3/4 filesystems to btrfs btrfstune tweak some filesystem properties on an unmounted filesystem btrfs-select-super rescue tool to overwrite the primary superblock from a spare copy btrfs-find-root rescue helper to find tree roots in a filesystem Deprecated and obsolete tools: btrfs-debug-tree moved to btrfs inspect-internal dump-tree. Removed from source distribution. btrfs-show-super moved to btrfs inspect-internal dump-super, standalone removed. btrfs-zero-log moved to btrfs rescue zero-log, standalone removed. For space-constrained environments, it's possible to build a single binary with the functionality of several standalone tools. 
This follows the concept of busybox, where the file name selects the functionality. This works for symlinks or hardlinks. The full list can be obtained by btrfs help --box. EXIT STATUS top btrfs returns a zero exit status if it succeeds. Non-zero is returned in case of failure. AVAILABILITY top btrfs is part of btrfs-progs. Please refer to the btrfs wiki http://btrfs.wiki.kernel.org for further details. SEE ALSO top btrfs(5), btrfs-balance(8), btrfs-check(8), btrfs-convert(8), btrfs-device(8), btrfs-filesystem(8), btrfs-inspect-internal(8), btrfs-property(8), btrfs-qgroup(8), btrfs-quota(8), btrfs-receive(8), btrfs-replace(8), btrfs-rescue(8), btrfs-restore(8), btrfs-scrub(8), btrfs-send(8), btrfs-subvolume(8), btrfstune(8), mkfs.btrfs(8) COLOPHON top This page is part of the btrfs-progs (btrfs filesystem tools) project. Information about the project can be found at https://btrfs.wiki.kernel.org/index.php/Btrfs_source_repositories. If you have a bug report for this manual page, see https://btrfs.wiki.kernel.org/index.php/Problem_FAQ#How_do_I_report_bugs_and_issues.3F. This page was obtained from the project's upstream Git repository git://git.kernel.org/pub/scm/linux/kernel/git/kdave/btrfs-progs.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-14.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org Btrfs v5.16.1 02/06/2022 BTRFS(8) Pages that refer to this page: systemd-nspawn(1), org.freedesktop.import1(5), fsck.btrfs(8), mkfs.btrfs(8), systemd-gpt-auto-generator(8) | # btrfs\n\n> A filesystem based on the copy-on-write (COW) principle for Linux.\n> Some subcommands such as `btrfs device` have their own usage documentation.\n> More information: <https://btrfs.readthedocs.io/en/latest/btrfs.html>.\n\n- Create a subvolume:\n\n`sudo btrfs subvolume create {{path/to/subvolume}}`\n\n- List subvolumes:\n\n`sudo btrfs subvolume list {{path/to/mount_point}}`\n\n- Show space usage information:\n\n`sudo btrfs filesystem df {{path/to/mount_point}}`\n\n- Enable quota:\n\n`sudo btrfs quota enable {{path/to/subvolume}}`\n\n- Show quota:\n\n`sudo btrfs qgroup show {{path/to/subvolume}}`\n |
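The unambiguous-shortening rule described under COMMAND SYNTAX (btrfs sub works, btrfs file s does not) can be sketched in a few lines of shell. The helper `resolve_cmd` and the hard-coded command list are illustrative assumptions; btrfs implements this matching internally, per command word:

```shell
#!/bin/sh
# Resolve an abbreviated command word against a list of candidates:
# accept it only if exactly one candidate starts with the abbreviation.
# Function name and candidate list are ours, not from btrfs source.
resolve_cmd() {
    abbrev=$1; shift
    matches=""
    for cmd in "$@"; do
        case "$cmd" in
            "$abbrev"*) matches="$matches $cmd" ;;
        esac
    done
    set -- $matches
    if [ "$#" -eq 1 ]; then
        echo "$1"                                   # unambiguous: expand it
    elif [ "$#" -eq 0 ]; then
        echo "unknown command: $abbrev" >&2; return 1
    else
        echo "ambiguous, matches: $*" >&2; return 1  # like "btrfs file s"
    fi
}

resolve_cmd sub subvolume scrub send    # only "subvolume" starts with "sub"
# resolve_cmd s subvolume scrub send    # would fail: three candidates match
```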
btrfs-balance | btrfs-balance(8) - Linux manual page NAME | SYNOPSIS | DESCRIPTION | COMPATIBILITY | PERFORMANCE IMPLICATIONS | SUBCOMMAND | FILTERS | ENOSPC | EXAMPLES | EXIT STATUS | AVAILABILITY | SEE ALSO | COLOPHON BTRFS-BALANCE(8) Btrfs Manual BTRFS-BALANCE(8) NAME top btrfs-balance - balance block groups on a btrfs filesystem SYNOPSIS top btrfs balance <subcommand> <args> DESCRIPTION top The primary purpose of the balance feature is to spread block groups across all devices so they match constraints defined by the respective profiles. See mkfs.btrfs(8) section PROFILES for more details. The scope of the balancing process can be further tuned by use of filters that can select the block groups to process. Balance works only on a mounted filesystem. Extent sharing is preserved and reflinks are not broken. Files are not defragmented nor recompressed; file extents are preserved but the physical location on devices will change. The balance operation is cancellable by the user. The on-disk state of the filesystem is always consistent, so an unexpected interruption (eg. system crash, reboot) does not corrupt the filesystem. The progress of the balance operation is temporarily stored as an internal state and will be resumed upon mount, unless the mount option skip_balance is specified. Warning: running balance without filters will take a lot of time, as it basically moves data/metadata from the whole filesystem and needs to update all block pointers. The filters can be used to perform the following actions: convert block group profiles (filter convert) make block group usage more compact (filter usage) perform actions only on a given device (filters devid, drange) The filters can be applied to a combination of block group types (data, metadata, system). Note that changing only the system type needs the force option. 
Otherwise, the system type gets automatically converted whenever the metadata profile is converted. When metadata redundancy is reduced (eg. from RAID1 to single) the force option is also required, and this is noted in the system log. Note the balance operation needs enough work space, ie. space that is completely unused in the filesystem, otherwise this may lead to ENOSPC reports. See the section ENOSPC for more details. COMPATIBILITY top Note The balance subcommand also exists under the btrfs filesystem namespace. This still works for backward compatibility but is deprecated and should not be used any more. Note A short syntax btrfs balance <path> works due to backward compatibility but is deprecated and should not be used any more. Use the btrfs balance start command instead. PERFORMANCE IMPLICATIONS top Balancing operations are very IO intensive and can also be quite CPU intensive, impacting other ongoing filesystem operations. Typically large amounts of data are copied from one location to another, with corresponding metadata updates. Depending upon the block group layout, it can also be seek heavy. Performance on rotational devices is noticeably worse compared to SSDs or fast arrays. SUBCOMMAND top cancel <path> cancels a running or paused balance; the command will block and wait until the current blockgroup being processed completes. Since kernel 5.7 the response time of the cancellation is significantly improved; on older kernels it might take a long time until the currently processed chunk is completely finished. pause <path> pause a running balance operation; this will store the state of the balance progress and used filters to the filesystem resume <path> resume an interrupted balance; the balance status must be stored on the filesystem from a previous run, eg. 
after it was paused or forcibly interrupted and mounted again with skip_balance start [options] <path> start the balance operation according to the specified filters; without any filters the data and metadata from the whole filesystem are moved. The process runs in the foreground. Note the balance command without filters will basically move everything in the filesystem to a new physical location on devices (ie. it does not affect the logical properties of file extents like offsets within files and extent sharing). The run time is potentially very long, depending on the filesystem size. To prevent starting a full balance by accident, the user is warned and has a few seconds to cancel the operation before it starts. The warning and delay can be skipped with the --full-balance option. Please note that the filters must be written together with the -d, -m and -s options, because they're optional and bare -d and -m also work and mean no filters. Note when the target profile for the conversion filter is raid5 or raid6, there's a safety timeout of 10 seconds to warn users about the status of the feature Options -d[<filters>] act on data block groups, see FILTERS section for details about filters -m[<filters>] act on metadata chunks, see FILTERS section for details about filters -s[<filters>] act on system chunks (requires -f), see FILTERS section for details about filters. -f force a reduction of metadata integrity, eg. when going from raid1 to single, or skip the safety timeout when the target conversion profile is raid5 or raid6 --background|--bg run the balance operation asynchronously in the background, uses fork(2) to start the process that calls the kernel ioctl --enqueue wait if there's another exclusive operation running, otherwise continue -v (deprecated) alias for global -v option status [-v] <path> Show status of running or paused balance. 
Options -v (deprecated) alias for global -v option FILTERS top From kernel 3.3 onwards, btrfs balance can limit its action to a subset of the whole filesystem, and can be used to change the replication configuration (e.g. moving data from single to RAID1). This functionality is accessed through the -d, -m or -s options to btrfs balance start, which filter on data, metadata and system blocks respectively. A filter has the following structure: type[=params][,type=...] The available types are: profiles=<profiles> Balances only block groups with the given profiles. Parameters are a list of profile names separated by "|" (pipe). usage=<percent>, usage=<range> Balances only block groups with usage under the given percentage. The value of 0 is allowed and will clean up completely unused block groups; this should not require any new work space allocated. You may want to use usage=0 in case balance is returning ENOSPC and your filesystem is not too full. The argument may be a single value or a range. The single value N means at most N percent used, equivalent to the ..N range syntax. Kernels prior to 4.4 accept only the single value format. The minimum range boundary is inclusive, the maximum is exclusive. devid=<id> Balances only block groups which have at least one chunk on the given device. To list devices with ids use btrfs filesystem show. drange=<range> Balance only block groups which overlap with the given byte range on any device. Use in conjunction with devid to filter on a specific device. The parameter is a range specified as start..end. vrange=<range> Balance only block groups which overlap with the given byte range in the filesystem's internal virtual address space. This is the address space that most reports from btrfs in the kernel log use. The parameter is a range specified as start..end. convert=<profile> Convert each selected block group to the given profile name identified by parameters. 
Note starting with kernel 4.5, the data chunks can be converted to/from the DUP profile on a single device. Note starting with kernel 4.6, all profiles can be converted to/from DUP on multi-device filesystems. limit=<number>, limit=<range> Process only the given number of chunks, after all filters are applied. This can be used to specifically target a chunk in connection with other filters (drange, vrange) or just simply limit the amount of work done by a single balance run. The argument may be a single value or a range. The single value N means at most N chunks, equivalent to the ..N range syntax. Kernels prior to 4.4 accept only the single value format. The range minimum and maximum are inclusive. stripes=<range> Balance only block groups which have the given number of stripes. The parameter is a range specified as start..end. Makes sense for block group profiles that utilize striping, ie. RAID0/10/5/6. The range minimum and maximum are inclusive. soft Takes no parameters. Only has meaning when converting between profiles. When doing convert from one profile to another and soft mode is on, chunks that already have the target profile are left untouched. This is useful e.g. when half of the filesystem was converted earlier but got cancelled. The soft mode switch is (like every other filter) per-type. For example, this means that we can convert metadata chunks the "hard" way while converting data chunks selectively with the soft switch. Profile names, used in profiles and convert, are one of: raid0, raid1, raid1c3, raid1c4, raid10, raid5, raid6, dup, single. The mixed data/metadata profiles can be converted in the same way, but conversion between mixed and non-mixed profiles is not implemented. For the constraints of the profiles please refer to mkfs.btrfs(8), section PROFILES. ENOSPC top The way balance operates, it usually needs to temporarily create a new block group and move the old data there, before the old block group can be removed. 
For that it needs the work space, otherwise it fails for ENOSPC reasons. This is not the same ENOSPC as when the free space is exhausted. This refers to the space on the level of block groups, which are bigger parts of the filesystem that contain many file extents. The free work space can be calculated from the output of the btrfs filesystem show command: Label: 'BTRFS' uuid: 8a9d72cd-ead3-469d-b371-9c7203276265 Total devices 2 FS bytes used 77.03GiB devid 1 size 53.90GiB used 51.90GiB path /dev/sdc2 devid 2 size 53.90GiB used 51.90GiB path /dev/sde1 size - used = free work space 53.90GiB - 51.90GiB = 2.00GiB An example of a filter that does not require workspace is usage=0. This will scan through all unused block groups of a given type and will reclaim the space. After that it might be possible to run other filters. CONVERSIONS ON MULTIPLE DEVICES Conversion to profiles based on striping (RAID0, RAID5/6) requires the work space on each device. An interrupted balance may leave partially filled block groups that consume the work space. EXAMPLES top A more comprehensive example when going from one to multiple devices, and back, can be found in section TYPICAL USECASES of btrfs-device(8). MAKING BLOCK GROUP LAYOUT MORE COMPACT The layout of block groups is not normally visible; most tools report only summarized numbers of free or used space, but there are still some hints provided. Let's use the following real life example and start with the output: $ btrfs filesystem df /path Data, single: total=75.81GiB, used=64.44GiB System, RAID1: total=32.00MiB, used=20.00KiB Metadata, RAID1: total=15.87GiB, used=8.84GiB GlobalReserve, single: total=512.00MiB, used=0.00B Roughly calculating for data, 75G - 64G = 11G, the used/total ratio is about 85%. How can we interpret that: chunks are filled by 85% on average, ie. 
the usage filter with anything smaller than 85 will likely not affect anything; in a more realistic scenario, the space is distributed unevenly, and we can assume there are completely used chunks while the remaining are partially filled. Compacting the layout could be used on both. In the former case it would spread data of a given chunk to the others and remove it. Here we can estimate that roughly 850 MiB of data have to be moved (85% of a 1 GiB chunk). In the latter case, targeting the partially used chunks will have to move less data and thus will be faster. A typical filter command would look like: # btrfs balance start -dusage=50 /path Done, had to relocate 2 out of 97 chunks $ btrfs filesystem df /path Data, single: total=74.03GiB, used=64.43GiB System, RAID1: total=32.00MiB, used=20.00KiB Metadata, RAID1: total=15.87GiB, used=8.84GiB GlobalReserve, single: total=512.00MiB, used=0.00B As you can see, the total amount of data is decreased by just 1 GiB, which is an expected result. Let's see what will happen when we increase the estimated usage filter. # btrfs balance start -dusage=85 /path Done, had to relocate 13 out of 95 chunks $ btrfs filesystem df /path Data, single: total=68.03GiB, used=64.43GiB System, RAID1: total=32.00MiB, used=20.00KiB Metadata, RAID1: total=15.87GiB, used=8.85GiB GlobalReserve, single: total=512.00MiB, used=0.00B Now the used/total ratio is about 94% and we moved about 74G - 68G = 6G of data to the remaining blockgroups, ie. the 6GiB are now free of filesystem structures, and can be reused for new data or metadata block groups. We can do a similar exercise with the metadata block groups, but this should not typically be necessary, unless the used/total ratio is really off. Here the ratio is roughly 50% but the difference as an absolute number is "a few gigabytes", which can be considered normal for a workload with snapshots or reflinks updated frequently. 
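The size - used arithmetic from the ENOSPC section can be automated. A sketch that parses btrfs filesystem show-style devid lines with awk; the sample input is the two-device example from this page, and the field positions are assumed from that output format:

```shell
#!/bin/sh
# Sum per-device free work space (size - used) from "btrfs filesystem show"
# style output. Sample data is the example quoted in the ENOSPC section;
# in practice you would pipe in: sudo btrfs filesystem show /path
show_output='devid 1 size 53.90GiB used 51.90GiB path /dev/sdc2
devid 2 size 53.90GiB used 51.90GiB path /dev/sde1'

printf '%s\n' "$show_output" | awk '
    /devid/ {
        size = $4; used = $6                 # e.g. 53.90GiB and 51.90GiB
        sub(/GiB/, "", size); sub(/GiB/, "", used)
        free = size - used
        printf "devid %s: %.2fGiB free work space\n", $2, free
        total += free
    }
    END { printf "total: %.2fGiB\n", total }
'
```

With the sample numbers each device contributes 2.00GiB, matching the manual's 53.90GiB - 51.90GiB = 2.00GiB calculation.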
# btrfs balance start -musage=50 /path Done, had to relocate 4 out of 89 chunks $ btrfs filesystem df /path Data, single: total=68.03GiB, used=64.43GiB System, RAID1: total=32.00MiB, used=20.00KiB Metadata, RAID1: total=14.87GiB, used=8.85GiB GlobalReserve, single: total=512.00MiB, used=0.00B Just a 1 GiB decrease, which possibly means there are block groups with good utilization. Making the metadata layout more compact would in turn require updating more metadata structures, ie. lots of IO. As running out of metadata space is a more severe problem, it's not necessary to keep the utilization ratio too high. For the purpose of this example, let's see the effects of further compaction: # btrfs balance start -musage=70 /path Done, had to relocate 13 out of 88 chunks $ btrfs filesystem df . Data, single: total=68.03GiB, used=64.43GiB System, RAID1: total=32.00MiB, used=20.00KiB Metadata, RAID1: total=11.97GiB, used=8.83GiB GlobalReserve, single: total=512.00MiB, used=0.00B GETTING RID OF COMPLETELY UNUSED BLOCK GROUPS Normally the balance operation needs a work space, to temporarily move the data before the old block groups get removed. If there's no work space, it ends with "no space left". There's a special case when the block groups are completely unused, possibly left after removing lots of files or deleting snapshots. Removing empty block groups is automatic since 3.18. The same can be achieved manually, with the notable exception that this operation does not require the work space. Thus it can be used to reclaim unused block groups and make the space available again. # btrfs balance start -dusage=0 /path This should lead to a decrease in the total numbers in the btrfs filesystem df output. EXIT STATUS top Unless indicated otherwise below, all btrfs balance subcommands return a zero exit status if they succeed, and non-zero in case of failure. The pause, cancel, and resume subcommands exit with a status of 2 if they fail because a balance operation was not running. 
The status subcommand exits with a status of 0 if a balance operation is not running, 1 if the command-line usage is incorrect or a balance operation is still running, and 2 on other errors. AVAILABILITY top btrfs is part of btrfs-progs. Please refer to the btrfs wiki http://btrfs.wiki.kernel.org for further details. SEE ALSO top mkfs.btrfs(8), btrfs-device(8) COLOPHON top This page is part of the btrfs-progs (btrfs filesystem tools) project. Information about the project can be found at https://btrfs.wiki.kernel.org/index.php/Btrfs_source_repositories. If you have a bug report for this manual page, see https://btrfs.wiki.kernel.org/index.php/Problem_FAQ#How_do_I_report_bugs_and_issues.3F. This page was obtained from the project's upstream Git repository git://git.kernel.org/pub/scm/linux/kernel/git/kdave/btrfs-progs.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-14.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org Btrfs v5.16.1 02/06/2022 BTRFS-BALANCE(8) Pages that refer to this page: btrfs(8), btrfs-convert(8), btrfs-device(8), btrfstune(8), mkfs.btrfs(8) 
| # btrfs balance\n\n> Balance block groups on a btrfs filesystem.\n> More information: <https://btrfs.readthedocs.io/en/latest/btrfs-balance.html>.\n\n- Show the status of a running or paused balance operation:\n\n`sudo btrfs balance status {{path/to/btrfs_filesystem}}`\n\n- Balance all block groups (slow; rewrites all blocks in the filesystem):\n\n`sudo btrfs balance start {{path/to/btrfs_filesystem}}`\n\n- Balance data block groups which are less than 15% utilized, running the operation in the background:\n\n`sudo btrfs balance start --bg -dusage={{15}} {{path/to/btrfs_filesystem}}`\n\n- Balance a maximum of 10 metadata chunks with less than 20% utilization and at least 1 chunk on a given device `devid` (see `btrfs filesystem show`):\n\n`sudo btrfs balance start -musage={{20}},limit={{10}},devid={{devid}} {{path/to/btrfs_filesystem}}`\n\n- Convert data blocks to raid6 and metadata to raid1c3 (see mkfs.btrfs(8) for profiles):\n\n`sudo btrfs balance start -dconvert={{raid6}} -mconvert={{raid1c3}} {{path/to/btrfs_filesystem}}`\n\n- Convert data blocks to raid1, skipping already converted chunks (e.g. after a previously cancelled conversion operation):\n\n`sudo btrfs balance start -dconvert={{raid1}},soft {{path/to/btrfs_filesystem}}`\n\n- Cancel, pause, or resume a running or paused balance operation:\n\n`sudo btrfs balance {{cancel|pause|resume}} {{path/to/btrfs_filesystem}}`\n |
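Because btrfs balance status exits 0 only when no balance is running (see EXIT STATUS above), a script can poll it to wait for completion. A sketch of that loop; here btrfs is a stub function that reports "running" (exit 1) twice and then "done", so the logic is testable without root or a real filesystem:

```shell
#!/bin/sh
# Poll "btrfs balance status" until it reports that no balance is running
# (exit status 0, per the man page). The stub below stands in for the real
# command; drop it to run against an actual mounted filesystem (needs root).
attempts=0
btrfs() {   # stub for: btrfs balance status <path>
    attempts=$((attempts + 1))
    [ "$attempts" -gt 2 ]   # "still running" (exit 1) twice, then "done"
}

polls=0
until btrfs balance status /mnt/data >/dev/null 2>&1; do
    polls=$((polls + 1))
    # sleep 10   # real-world pacing; omitted so the sketch runs instantly
done
echo "balance finished after $polls polls"
```

Note the inverted sense: the loop runs *until* status succeeds, because success means "not running"; a plain `while` would exit immediately while the balance is still in progress.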
btrfs-check | btrfs-check(8) - Linux manual page NAME | SYNOPSIS | DESCRIPTION | SAFE OR ADVISORY OPTIONS | DANGEROUS OPTIONS | EXIT STATUS | AVAILABILITY | SEE ALSO | COLOPHON BTRFS-CHECK(8) Btrfs Manual BTRFS-CHECK(8) NAME top btrfs-check - check or repair a btrfs filesystem SYNOPSIS top btrfs check [options] <device> DESCRIPTION top The filesystem checker is used to verify structural integrity of a filesystem and attempt to repair it if requested. It is recommended to unmount the filesystem prior to running the check, but it is possible to start checking a mounted filesystem (see --force). By default, btrfs check will not modify the device, but you can reaffirm that by the option --readonly. btrfsck is an alias of the btrfs check command and is now deprecated. Warning Do not use --repair unless you are advised to do so by a developer or an experienced user, and then only after having accepted that no fsck can successfully repair all types of filesystem corruption. Eg. some other software or hardware bugs can fatally damage a volume. The structural integrity check verifies if internal filesystem objects or data structures satisfy the constraints, point to the right objects or are correctly connected together. There are several cross checks that can detect wrong reference counts of shared extents, backreferences, missing extents of inodes, directory and inode connectivity etc. The amount of memory required can be high, depending on the size of the filesystem, and similarly the run time; the modes described below can also affect that. SAFE OR ADVISORY OPTIONS top -b|--backup use the first valid set of backup roots stored in the superblock This can be combined with --super if some of the superblocks are damaged. 
--check-data-csum verify checksums of data blocks This expects that the filesystem is otherwise OK, and is basically an offline scrub that does not repair data from spare copies. --chunk-root <bytenr> use the given offset bytenr for the chunk tree root -E|--subvol-extents <subvolid> show extent state for the given subvolume -p|--progress indicate progress at various checking phases -Q|--qgroup-report verify qgroup accounting and compare against filesystem accounting -r|--tree-root <bytenr> use the given offset bytenr for the tree root --readonly (default) run in read-only mode; this option exists to calm potential panic when users are going to run the checker -s|--super <superblock> use the <superblock>th superblock copy, valid values are 0, 1 or 2 if the respective superblock offset is within the device size This can be used to use a different starting point if some of the primary superblock is damaged. --clear-space-cache v1|v2 completely wipe all free space cache of the given type For free space cache v1, the clear_cache kernel mount option only rebuilds the free space cache for block groups that are modified while the filesystem is mounted with that option. Thus, using this option with v1 makes it possible to actually clear the entire free space cache. For free space cache v2, the clear_cache kernel mount option destroys the entire free space cache. This option, with v2, provides an alternative method of clearing the free space cache that doesn't require mounting the filesystem. 
--clear-ino-cache remove leftover items pertaining to the deprecated inode map feature DANGEROUS OPTIONS top --repair enable the repair mode and attempt to fix problems where possible Note there's a warning and a 10 second delay when this option is run without --force, to give users a chance to think twice before running repair; the warnings in documentation have shown to be insufficient --init-csum-tree create a new checksum tree and recalculate checksums in all files Note Do not blindly use this option to fix checksum mismatch problems. --init-extent-tree build the extent tree from scratch Note Do not use unless you know what you're doing. --mode <MODE> select mode of operation regarding memory and IO The MODE can be one of: original The metadata are read into memory and verified, thus the requirements are high on large filesystems and can even lead to out-of-memory conditions. The possible workaround is to export the block device over the network to a machine with enough memory. lowmem This mode is supposed to address the high memory consumption at the cost of increased IO when it needs to re-read blocks. This may increase run time. Note lowmem mode does not work with --repair yet, and is still considered experimental. --force allow work on a mounted filesystem. Note that this should work fine on a quiescent or read-only mounted filesystem but may crash if the device is changed externally, eg. by the kernel module. Repair without mount checks is not supported right now. This option also skips the delay and warning in the repair mode (see --repair). EXIT STATUS top btrfs check returns a zero exit status if it succeeds. Non-zero is returned in case of failure. AVAILABILITY top btrfs is part of btrfs-progs. Please refer to the btrfs wiki http://btrfs.wiki.kernel.org for further details. SEE ALSO top mkfs.btrfs(8), btrfs-scrub(8), btrfs-rescue(8) COLOPHON top This page is part of the btrfs-progs (btrfs filesystem tools) project. 
Information about the project can be found at https://btrfs.wiki.kernel.org/index.php/Btrfs_source_repositories. If you have a bug report for this manual page, see https://btrfs.wiki.kernel.org/index.php/Problem_FAQ#How_do_I_report_bugs_and_issues.3F. This page was obtained from the project's upstream Git repository git://git.kernel.org/pub/scm/linux/kernel/git/kdave/btrfs-progs.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-14.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org Btrfs v5.16.1 02/06/2022 BTRFS-CHECK(8) Pages that refer to this page: btrfs(8), btrfs-rescue(8), btrfs-restore(8), fsck.btrfs(8) 
| # btrfs check\n\n> Check or repair a btrfs filesystem.\n> More information: <https://btrfs.readthedocs.io/en/latest/btrfs-check.html>.\n\n- Check a btrfs filesystem:\n\n`sudo btrfs check {{path/to/partition}}`\n\n- Check and repair a btrfs filesystem (dangerous):\n\n`sudo btrfs check --repair {{path/to/partition}}`\n\n- Show the progress of the check:\n\n`sudo btrfs check --progress {{path/to/partition}}`\n\n- Verify the checksum of each data block (if the filesystem is good):\n\n`sudo btrfs check --check-data-csum {{path/to/partition}}`\n\n- Use the `n`-th superblock (`n` can be 0, 1 or 2):\n\n`sudo btrfs check --super {{n}} {{path/to/partition}}`\n\n- Rebuild the checksum tree:\n\n`sudo btrfs check --repair --init-csum-tree {{path/to/partition}}`\n\n- Rebuild the extent tree:\n\n`sudo btrfs check --repair --init-extent-tree {{path/to/partition}}`\n |
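The exit-status convention above (zero on success, non-zero on failure) lends itself to simple scripting. A minimal sketch, assuming only what the manual states about the exit status; the `check_status_msg` helper and its messages are our own, and the device path in the comment is a placeholder:

```shell
#!/bin/sh
# Interpret the exit status of `btrfs check`. The manual only guarantees
# zero on success and non-zero on failure, so the labels below are ours.
check_status_msg() {
  # $1: exit status from a prior `sudo btrfs check <device>` run
  if [ "$1" -eq 0 ]; then
    echo "filesystem is clean"
  else
    echo "errors found (status $1) - consider --repair only after a backup"
  fi
}

# Typical call site (device path is a placeholder):
#   sudo btrfs check /dev/sdX; check_status_msg $?
check_status_msg 0
check_status_msg 1
```

Keeping the status in a variable rather than re-running the check avoids touching the (possibly damaged) filesystem twice.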
btrfs-device | btrfs-device(8) - Linux manual page NAME | SYNOPSIS | DESCRIPTION | DEVICE MANAGEMENT | SUBCOMMAND | TYPICAL USECASES | DEVICE STATS | EXIT STATUS | AVAILABILITY | SEE ALSO | COLOPHON BTRFS-DEVICE(8) Btrfs Manual BTRFS-DEVICE(8) NAME top btrfs-device - manage devices of btrfs filesystems SYNOPSIS top btrfs device <subcommand> <args> DESCRIPTION top The btrfs device command group is used to manage devices of btrfs filesystems. DEVICE MANAGEMENT top A btrfs filesystem can be created on top of single or multiple block devices. Data and metadata are organized in allocation profiles with various redundancy policies. There's some similarity with traditional RAID levels, but this could be confusing to users familiar with the traditional meaning. Due to the similarity, the RAID terminology is widely used in the documentation. See mkfs.btrfs(8) for more details and the exact profile capabilities and constraints. The device management works on a mounted filesystem. Devices can be added, removed or replaced by commands provided by btrfs device and btrfs replace. The profiles can also be changed, provided there's enough workspace to do the conversion, using the btrfs balance command, namely the filter convert. Type The block group profile type is the main distinction of the information stored on the block device. User data are called Data, the internal data structures managed by the filesystem are Metadata and System. Profile A profile describes an allocation policy based on the redundancy/replication constraints in connection with the number of devices. The profile applies to data and metadata block groups separately. Eg. single, RAID1. RAID level Where applicable, the level refers to a profile that matches constraints of the standard RAID levels. At the moment the supported ones are: RAID0, RAID1, RAID10, RAID5 and RAID6. 
See the section TYPICAL USECASES for some examples. SUBCOMMAND top add [-Kf] <device> [<device>...] <path> Add device(s) to the filesystem identified by <path>. If applicable, a whole device discard (TRIM) operation is performed prior to adding the device. A device with an existing filesystem detected by blkid(8) will prevent device addition and has to be forced. Alternatively the filesystem can be wiped from the device using eg. the wipefs(8) tool. The operation is instant and does not affect existing data. The operation merely adds the device to the filesystem structures and creates some block group headers. Options -K|--nodiscard do not perform discard (TRIM) by default -f|--force force overwrite of existing filesystem on the given disk(s) --enqueue wait if there's another exclusive operation running, otherwise continue remove [options] <device>|<devid> [<device>|<devid>...] <path> Remove device(s) from a filesystem identified by <path>. Device removal must satisfy the profile constraints, otherwise the command fails. The filesystem must be converted to profile(s) that would allow the removal. This can typically happen when going down from 2 devices to 1 and using the RAID1 profile. See the TYPICAL USECASES section below. The operation can take a long time as it needs to move all data from the device. It is possible to delete the device that was used to mount the filesystem. The device entry in the mount table will be replaced by another device name with the lowest device id. If the filesystem is mounted in degraded mode (-o degraded), the special term missing can be used for device. In that case, the first device that is described by the filesystem metadata, but not present at the mount time, will be removed. Note In most cases, there is only one missing device in degraded mode, otherwise mount fails. If there are two or more devices missing (e.g. possible in RAID6), you need to specify missing as many times as the number of missing devices to remove all of them. 
Options --enqueue wait if there's another exclusive operation running, otherwise continue delete <device>|<devid> [<device>|<devid>...] <path> Alias of remove kept for backward compatibility ready <device> Wait until all devices of a multiple-device filesystem are scanned and registered within the kernel module. This is to provide a way for automatic filesystem mounting tools to wait before the mount can start. The device scan is only one of the preconditions and the mount can fail for other reasons. Normal users usually do not need this command and may safely ignore it. scan [options] [<device> [<device>...]] Scan devices for a btrfs filesystem and register them with the kernel module. This allows mounting a multiple-device filesystem by specifying just one from the whole group. If no devices are passed, all block devices that blkid reports to contain btrfs are scanned. The options --all-devices or -d can be used as a fallback in case blkid is not available. If used, behavior is the same as if no devices are passed. The command can be run repeatedly. Devices that have been already registered remain as such. Reloading the kernel module will drop this information. There's an alternative way of mounting a multiple-device filesystem without the need for prior scanning. See the mount option device. Options -d|--all-devices Enumerate and register all devices, use as a fallback in case blkid is not available. -u|--forget Unregister a given device or all stale devices if no path is given; the device must be unmounted, otherwise it's an error. stats [options] <path>|<device> Read and print the device IO error statistics for all devices of the given filesystem identified by <path> or for a single <device>. The filesystem must be mounted. See section DEVICE STATS for more information about the reported statistics and the meaning. Options -z|--reset Print the stats and reset the values to zero afterwards. -c|--check Check if the stats are all zeros and return 0 if it is so. 
Set bit 6 of the return code if any of the statistics is non-zero. The error value is 65 if reading stats from at least one device failed, otherwise it's 64. usage [options] <path> [<path>...] Show detailed information about internal allocations on devices. The level of detail can differ if the command is run under a regular or the root user (due to use of restricted ioctls). The first example below is for a normal user (warning included) and the next one with root on the same filesystem: WARNING: cannot read detailed chunk info, per-device usage will not be shown, run as root /dev/sdc1, ID: 1 Device size: 931.51GiB Device slack: 0.00B Unallocated: 931.51GiB /dev/sdc1, ID: 1 Device size: 931.51GiB Device slack: 0.00B Data,single: 641.00GiB Data,RAID0/3: 1.00GiB Metadata,single: 19.00GiB System,single: 32.00MiB Unallocated: 271.48GiB Device size size of the device as seen by the filesystem (may be different than the actual device size) Device slack portion of the device not used by the filesystem but still available in the physical space provided by the device, eg. after a device shrink Data,single, Metadata,single, System,single in general, list of block group type (Data, Metadata, System) and profile (single, RAID1, ...) 
allocated on the device Data,RAID0/3 in particular, striped profiles RAID0/RAID10/RAID5/RAID6 with the number of devices on which the stripes are allocated, multiple occurrences of the same profile can appear in case a new device has been added and all new available stripes have been used for writes Unallocated remaining space that the filesystem can still use for new block groups Options -b|--raw raw numbers in bytes, without the B suffix -h|--human-readable print human friendly numbers, base 1024, this is the default -H print human friendly numbers, base 1000 --iec select the 1024 base for the following options, according to the IEC standard --si select the 1000 base for the following options, according to the SI standard -k|--kbytes show sizes in KiB, or kB with --si -m|--mbytes show sizes in MiB, or MB with --si -g|--gbytes show sizes in GiB, or GB with --si -t|--tbytes show sizes in TiB, or TB with --si If conflicting options are passed, the last one takes precedence. TYPICAL USECASES top STARTING WITH A SINGLE-DEVICE FILESYSTEM Assume we've created a filesystem on a block device /dev/sda with profile single/single (data/metadata), the device size is 50GiB and we've used the whole device for the filesystem. The mount point is /mnt. The amount of data stored is 16GiB, metadata have allocated 2GiB. ADD NEW DEVICE We want to increase the total size of the filesystem and keep the profiles. The size of the new device /dev/sdb is 100GiB. $ btrfs device add /dev/sdb /mnt The amount of free data space increases by less than 100GiB, some space is allocated for metadata. CONVERT TO RAID1 Now we want to increase the redundancy level of both data and metadata, but we'll do that in steps. Note that the device sizes are not equal and we'll use that to show the capabilities of split data/metadata and independent profiles. The constraint for RAID1 gives us at most 50GiB of usable space and exactly 2 copies will be stored on the devices. First we'll convert the metadata. 
As the metadata occupy less than 50GiB and there's enough workspace for the conversion process, we can do: $ btrfs balance start -mconvert=raid1 /mnt This operation can take a while, because all metadata have to be moved and all block pointers updated. Depending on the physical locations of the old and new blocks, the disk seeking is the key factor affecting performance. You'll note that the system block group has also been converted to RAID1, this normally happens as the system block group also holds metadata (the physical to logical mappings). What changed: available data space decreased by 3GiB, usable roughly (50 - 3) + (100 - 3) = 144 GiB metadata redundancy increased IOW, the unequal device sizes allow for combined space for data yet improved redundancy for metadata. If we decide to increase redundancy of data as well, we're going to lose 50GiB of the second device for obvious reasons. $ btrfs balance start -dconvert=raid1 /mnt The balance process needs some workspace (ie. a free device space without any data or metadata block groups) so the command could fail if there's too much data or the block groups occupy the whole first device. The device size of /dev/sdb as seen by the filesystem remains unchanged, but the logical space from 50-100GiB will be unused. REMOVE DEVICE Device removal must satisfy the profile constraints, otherwise the command fails. For example: $ btrfs device remove /dev/sda /mnt ERROR: error removing device '/dev/sda': unable to go below two devices on raid1 In order to remove a device, you need to convert the profile in this case: $ btrfs balance start -mconvert=dup -dconvert=single /mnt $ btrfs device remove /dev/sda /mnt DEVICE STATS top The device stats keep a persistent record of several error classes related to doing IO. The current values are printed at mount time and updated during filesystem lifetime or from a scrub run. 
$ btrfs device stats /dev/sda3 [/dev/sda3].write_io_errs 0 [/dev/sda3].read_io_errs 0 [/dev/sda3].flush_io_errs 0 [/dev/sda3].corruption_errs 0 [/dev/sda3].generation_errs 0 write_io_errs Failed writes to the block devices, means that the layers beneath the filesystem were not able to satisfy the write request. read_io_errs Read request analogy to write_io_errs. flush_io_errs Number of failed writes with the FLUSH flag set. The flushing is a method of forcing a particular order between write requests and is crucial for implementing crash consistency. In case of btrfs, all the metadata blocks must be permanently stored on the block device before the superblock is written. corruption_errs A block checksum mismatched or a corrupted metadata header was found. generation_errs The block generation does not match the expected value (eg. stored in the parent node). Since kernel 5.14 the device stats are also available in textual form in /sys/fs/btrfs/FSID/devinfo/DEVID/error_stats. EXIT STATUS top btrfs device returns a zero exit status if it succeeds. Non-zero is returned in case of failure. If the -c option is used, btrfs device stats will add 64 to the exit status if any of the error counters is non-zero. AVAILABILITY top btrfs is part of btrfs-progs. Please refer to the btrfs wiki http://btrfs.wiki.kernel.org for further details. SEE ALSO top mkfs.btrfs(8), btrfs-replace(8), btrfs-balance(8) COLOPHON top This page is part of the btrfs-progs (btrfs filesystem tools) project. Information about the project can be found at https://btrfs.wiki.kernel.org/index.php/Btrfs_source_repositories. If you have a bug report for this manual page, see https://btrfs.wiki.kernel.org/index.php/Problem_FAQ#How_do_I_report_bugs_and_issues.3F. This page was obtained from the project's upstream Git repository git://git.kernel.org/pub/scm/linux/kernel/git/kdave/btrfs-progs.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-14.) 
If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org Btrfs v5.16.1 02/06/2022 BTRFS-DEVICE(8) Pages that refer to this page: btrfs(8), btrfs-balance(8), btrfs-replace(8) | # btrfs device\n\n> Manage devices in a btrfs filesystem.\n> More information: <https://btrfs.readthedocs.io/en/latest/btrfs-device.html>.\n\n- Add one or more devices to a btrfs filesystem:\n\n`sudo btrfs device add {{path/to/block_device1}} [{{path/to/block_device2}}] {{path/to/btrfs_filesystem}}`\n\n- Remove a device from a btrfs filesystem:\n\n`sudo btrfs device remove {{path/to/device|device_id}} [{{...}}] {{path/to/btrfs_filesystem}}`\n\n- Display error statistics:\n\n`sudo btrfs device stats {{path/to/btrfs_filesystem}}`\n\n- Scan all disks and inform the kernel of all detected btrfs filesystems:\n\n`sudo btrfs device scan --all-devices`\n\n- Display detailed per-disk allocation statistics:\n\n`sudo btrfs device usage {{path/to/btrfs_filesystem}}`\n |
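The DEVICE STATS counters are printed in a simple `[device].counter value` textual format, so non-zero counters can also be detected by parsing the output rather than relying on the `--check` exit code. A hedged sketch; the `stats_have_errors` helper and the sample text are our own, only the line format comes from the manual:

```shell
#!/bin/sh
# Scan `btrfs device stats` output for non-zero error counters.
# The real command needs a mounted filesystem; here we only parse
# the textual format shown in the manual.
stats_have_errors() {
  # Reads stats lines on stdin; exit 0 if any counter is non-zero.
  awk '$2 != 0 { bad = 1 } END { exit !bad }'
}

# Hypothetical sample resembling the manual's example output:
sample='[/dev/sda3].write_io_errs 0
[/dev/sda3].read_io_errs 2
[/dev/sda3].corruption_errs 0'

if printf '%s\n' "$sample" | stats_have_errors; then
  echo "device reported IO errors"
else
  echo "all counters are zero"
fi
```

In practice the input would be `sudo btrfs device stats /mnt | stats_have_errors`, piped straight from the command.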
btrfs-filesystem | btrfs-filesystem(8) - Linux manual page NAME | SYNOPSIS | DESCRIPTION | SUBCOMMAND | EXAMPLES | EXIT STATUS | AVAILABILITY | SEE ALSO | COLOPHON BTRFS-FILESYSTEM(8) Btrfs Manual BTRFS-FILESYSTEM(8) NAME top btrfs-filesystem - command group that primarily does work on the whole filesystems SYNOPSIS top btrfs filesystem <subcommand> <args> DESCRIPTION top btrfs filesystem is used to perform several whole filesystem level tasks, including all the regular filesystem operations like resizing, space stats, label setting/getting, and defragmentation. There are other whole filesystem tasks like scrub or balance that are grouped in separate commands. SUBCOMMAND top df [options] <path> Show terse summary information about allocation of block group types of a given mount point. The original purpose of this command was a debugging helper. The output needs to be further interpreted and is not suitable for a quick overview. An example with description: device size: 1.9TiB, one device, no RAID filesystem size: 1.9TiB created with: mkfs.btrfs -d single -m single $ btrfs filesystem df /path Data, single: total=1.15TiB, used=1.13TiB System, single: total=32.00MiB, used=144.00KiB Metadata, single: total=12.00GiB, used=6.45GiB GlobalReserve, single: total=512.00MiB, used=0.00B Data, System and Metadata are separate block group types. GlobalReserve is an artificial and internal emergency space, see below. single the allocation profile, defined at mkfs time total sum of space reserved for all allocation profiles of the given type, ie. all Data/single. Note that it's not the total size of the filesystem. used sum of used space of the above, ie. file extents, metadata blocks GlobalReserve is an artificial and internal emergency space. It is used eg. when the filesystem is full. 
Its total size is dynamic based on the filesystem size, usually not larger than 512MiB, used may fluctuate. The GlobalReserve is a portion of Metadata. In case the filesystem metadata is exhausted, GlobalReserve/total + Metadata/used = Metadata/total. Otherwise there appears to be some unused space of Metadata. Options -b|--raw raw numbers in bytes, without the B suffix -h|--human-readable print human friendly numbers, base 1024, this is the default -H print human friendly numbers, base 1000 --iec select the 1024 base for the following options, according to the IEC standard --si select the 1000 base for the following options, according to the SI standard -k|--kbytes show sizes in KiB, or kB with --si -m|--mbytes show sizes in MiB, or MB with --si -g|--gbytes show sizes in GiB, or GB with --si -t|--tbytes show sizes in TiB, or TB with --si If conflicting options are passed, the last one takes precedence. defragment [options] <file>|<dir> [<file>|<dir>...] Defragment file data on a mounted filesystem. Requires kernel 2.6.33 and newer. If -r is passed, files in dir will be defragmented recursively (not descending to subvolumes, mount points and directory symlinks). The start position and the number of bytes to defragment can be specified by start and length using the -s and -l options below. Extents bigger than the value given by -t will be skipped, otherwise this value is used as a target extent size, but it is only advisory and may not be reached if the free space is too fragmented. Use 0 to take the kernel default, which is 256kB but may change in the future. You can also turn on compression in defragment operations. Warning Defragmenting with Linux kernel versions < 3.9 or ≥ 3.14-rc2 as well as with Linux stable kernel versions 3.10.31, 3.12.12 or 3.13.4 will break up the reflinks of COW data (for example files copied with cp --reflink, snapshots or de-duplicated data). This may cause considerable increase of space usage depending on the broken up reflinks. 
Note Directory arguments without -r do not defragment files recursively but will defragment certain internal trees (extent tree and the subvolume tree). This has been confusing and could be removed in the future. For start, len, size it is possible to append a units designator: 'K', 'M', 'G', 'T', 'P', or 'E', which represent KiB, MiB, GiB, TiB, PiB, or EiB, respectively (case does not matter). Options -c[<algo>] compress file contents while defragmenting. Optional argument selects the compression algorithm, zlib (default), lzo or zstd. Currently it's not possible to select no compression. See also section EXAMPLES. -r defragment files recursively in given directories, does not descend to subvolumes or mount points -f flush data for each file before going to the next file. This will limit the amount of dirty data to the current file, otherwise the amount accumulates from several files and will increase system load. This can also lead to ENOSPC if there's too much dirty data to write and it's not possible to make the reservations for the new data (ie. how the COW design works). -s <start>[kKmMgGtTpPeE] defragmentation will start from the given offset, default is the beginning of a file -l <len>[kKmMgGtTpPeE] defragment only up to len bytes, default is the file size -t <size>[kKmMgGtTpPeE] target extent size, do not touch extents bigger than size, default: 32M The value is only advisory and the final size of the extents may differ, depending on the state of the free space and fragmentation or other internal logic. Reasonable values are from tens to hundreds of megabytes. -v (deprecated) alias for global -v option du [options] <path> [<path>..] Calculate disk usage of the target files using FIEMAP. For individual files, it will report a count of total bytes, and exclusive (not shared) bytes. We also calculate a set shared value which is described below. Each argument to btrfs filesystem du will have a set shared value calculated for it. 
We define each set as those files found by a recursive search of an argument (recursion descends to subvolumes but not mount points). The set shared value then is a sum of all shared space referenced by the set. set shared takes into account overlapping shared extents, hence it isn't as simple as adding up shared extents. Options -s|--summarize display only a total for each argument --raw raw numbers in bytes, without the B suffix. --human-readable print human friendly numbers, base 1024, this is the default --iec select the 1024 base for the following options, according to the IEC standard. --si select the 1000 base for the following options, according to the SI standard. --kbytes show sizes in KiB, or kB with --si. --mbytes show sizes in MiB, or MB with --si. --gbytes show sizes in GiB, or GB with --si. --tbytes show sizes in TiB, or TB with --si. label [<device>|<mountpoint>] [<newlabel>] Show or update the label of a filesystem. This works on a mounted filesystem or a filesystem image. The newlabel argument is optional. The current label is printed if the argument is omitted. Note the maximum allowable length shall be less than 256 chars and must not contain a newline. The trailing newline is stripped automatically. resize [options] [<devid>:][+/-]<size>[kKmMgGtTpPeE]|[<devid>:]max <path> Resize a mounted filesystem identified by path. A particular device can be resized by specifying a devid. Warning If path is a file containing a BTRFS image then resize does not work as expected and does not resize the image. This would resize the underlying filesystem instead. The devid can be found in the output of btrfs filesystem show and defaults to 1 if not specified. The size parameter specifies the new size of the filesystem. If the prefix + or - is present the size is increased or decreased by the quantity size. If no units are specified, bytes are assumed for size. 
Optionally, the size parameter may be suffixed by one of the following unit designators: 'K', 'M', 'G', 'T', 'P', or 'E', which represent KiB, MiB, GiB, TiB, PiB, or EiB, respectively (case does not matter). If max is passed, the filesystem will occupy all available space on the device respecting devid (remember, devid 1 by default). The resize command does not manipulate the size of the underlying partition. If you wish to enlarge/reduce a filesystem, you must make sure you can expand the partition before enlarging the filesystem and shrink the partition after reducing the size of the filesystem. This can be done using fdisk(8) or parted(8) to delete the existing partition and recreate it with the new desired size. When recreating the partition make sure to use the same starting partition offset as before. Growing is usually instant as it only updates the size. However, shrinking could take a long time if there are data in the device area that's beyond the new end. Relocation of the data takes time. See also section EXAMPLES. Options --enqueue wait if there's another exclusive operation running, otherwise continue show [options] [<path>|<uuid>|<device>|<label>] Show the btrfs filesystem with some additional info about devices and space allocation. If none of path/uuid/device/label is passed, information about all the BTRFS filesystems is shown, both mounted and unmounted. Options -m|--mounted probe kernel for mounted BTRFS filesystems -d|--all-devices scan all devices under /dev, otherwise the devices list is extracted from the /proc/partitions file. This is a fallback option if there's no device node manager (like udev) available in the system. 
--raw raw numbers in bytes, without the B suffix --human-readable print human friendly numbers, base 1024, this is the default --iec select the 1024 base for the following options, according to the IEC standard --si select the 1000 base for the following options, according to the SI standard --kbytes show sizes in KiB, or kB with --si --mbytes show sizes in MiB, or MB with --si --gbytes show sizes in GiB, or GB with --si --tbytes show sizes in TiB, or TB with --si sync <path> Force a sync of the filesystem at path, similar to the sync(1) command. In addition, it starts cleaning of deleted subvolumes. To wait for the subvolume deletion to complete use the btrfs subvolume sync command. usage [options] <path> [<path>...] Show detailed information about internal filesystem usage. This is supposed to replace the btrfs filesystem df command in the long run. The level of detail can differ if the command is run under a regular or the root user (due to use of restricted ioctl). For both there's a summary section with information about space usage: $ btrfs filesystem usage /path WARNING: cannot read detailed chunk info, RAID5/6 numbers will be incorrect, run as root Overall: Device size: 1.82TiB Device allocated: 1.17TiB Device unallocated: 669.99GiB Device missing: 0.00B Used: 1.14TiB Free (estimated): 692.57GiB (min: 692.57GiB) Free (statfs, df) 692.57GiB Data ratio: 1.00 Metadata ratio: 1.00 Global reserve: 512.00MiB (used: 0.00B) Multiple profiles: no Device size sum of raw device capacity available to the filesystem Device allocated sum of total space allocated for data/metadata/system profiles, this also accounts for space reserved but not yet used for extents Device unallocated the remaining unallocated space for future allocations (difference of the above two numbers) Device missing sum of capacity of all missing devices Used sum of the used space of data/metadata/system profiles, not including the reserved space Free (estimated) approximate size of the remaining free 
space usable for data, including currently allocated space and estimating the usage of the unallocated space based on the block group profiles, the min is the lower bound of the estimate in case multiple profiles are present Free (statfs, df) the amount of space available for data as reported by the statfs syscall, also returned as Avail in the output of df. The value is calculated in a different way and may not match the estimate in some cases (eg. multiple profiles). Data ratio ratio of total space for data including redundancy or parity to the effectively usable data space, eg. single is 1.0, RAID1 is 2.0 and for RAID5/6 it depends on the number of devices Metadata ratio ditto, for metadata Global reserve portion of metadata currently used for global block reserve, used for emergency purposes (like deletion on a full filesystem) Multiple profiles what block group types (data, metadata) have more than one profile (single, raid1, ...), see btrfs(5) section FILESYSTEMS WITH MULTIPLE BLOCK GROUP PROFILES. And on a zoned filesystem there are two more lines in the Device section: Device zone unusable: 5.13GiB Device zone size: 256.00MiB Device zone unusable sum of space that's been used in the past but now is not due to COW and is no longer referenced; the chunks have to be reclaimed and zones reset to make it usable again Device zone size the reported zone size of the host-managed device, same for all devices The root user will also see stats broken down by block group types: Data,single: Size:1.15TiB, Used:1.13TiB (98.26%) /dev/sdb 1.15TiB Metadata,single: Size:12.00GiB, Used:6.45GiB (53.75%) /dev/sdb 12.00GiB System,single: Size:32.00MiB, Used:144.00KiB (0.44%) /dev/sdb 32.00MiB Unallocated: /dev/sdb 669.99GiB Data is the block group type, single is the block group profile, Size is total size occupied by this type, Used is the actually used space, the percent is the ratio of Used/Size. The Unallocated is remaining space. 
Options -b|--raw raw numbers in bytes, without the B suffix -h|--human-readable print human friendly numbers, base 1024, this is the default -H print human friendly numbers, base 1000 --iec select the 1024 base for the following options, according to the IEC standard --si select the 1000 base for the following options, according to the SI standard -k|--kbytes show sizes in KiB, or kB with --si -m|--mbytes show sizes in MiB, or MB with --si -g|--gbytes show sizes in GiB, or GB with --si -t|--tbytes show sizes in TiB, or TB with --si -T show data in tabular format If conflicting options are passed, the last one takes precedence. EXAMPLES top $ btrfs filesystem defrag -v -r dir/ Recursively defragment files under dir/, print files as they are processed. The file names will be printed in batches, similarly the amount of data triggered by defragmentation will be proportional to the last N printed files. The system dirty memory throttling will slow down the defragmentation but there can still be a lot of IO load and the system may stall for a moment. $ btrfs filesystem defrag -v -r -f dir/ Recursively defragment files under dir/, be verbose and wait until all blocks are flushed before processing the next file. You can note slower progress of the output and lower IO load (proportional to the currently defragmented file). $ btrfs filesystem defrag -v -r -f -clzo dir/ Recursively defragment files under dir/, be verbose, wait until all blocks are flushed and force file compression. $ btrfs filesystem defrag -v -r -t 64M dir/ Recursively defragment files under dir/, be verbose and try to merge extents to be about 64MiB. As stated above, the success rate depends on actual free space fragmentation and the final result is not guaranteed to meet the target even if run repeatedly. $ btrfs filesystem resize -1G /path $ btrfs filesystem resize 1:-1G /path Shrink the size of the filesystem's device id 1 by 1GiB. The first syntax expects a device with id 1 to exist, otherwise fails. 
The second is equivalent and more explicit. For a single-device filesystem it's typically not necessary to specify the devid though. $ btrfs filesystem resize max /path $ btrfs filesystem resize 1:max /path Let's assume that devid 1 exists and the filesystem does not occupy the whole block device, eg. it has been enlarged and we want to grow the filesystem. By simply using max as size we will achieve that. Note There are two ways to minimize the filesystem on a given device. The btrfs inspect-internal min-dev-size command, or iteratively shrink in steps. EXIT STATUS top btrfs filesystem returns a zero exit status if it succeeds. Non-zero is returned in case of failure. AVAILABILITY top btrfs is part of btrfs-progs. Please refer to the btrfs wiki http://btrfs.wiki.kernel.org for further details. SEE ALSO top btrfs-subvolume(8), mkfs.btrfs(8) COLOPHON top This page is part of the btrfs-progs (btrfs filesystem tools) project. Information about the project can be found at https://btrfs.wiki.kernel.org/index.php/Btrfs_source_repositories. If you have a bug report for this manual page, see https://btrfs.wiki.kernel.org/index.php/Problem_FAQ#How_do_I_report_bugs_and_issues.3F. This page was obtained from the project's upstream Git repository git://git.kernel.org/pub/scm/linux/kernel/git/kdave/btrfs-progs.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-14.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org Btrfs v5.16.1 02/06/2022 BTRFS-FILESYSTEM(8) Pages that refer to this page: btrfs(8), btrfs-replace(8) 
For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH. | # btrfs filesystem\n\n> Manage btrfs filesystems.\n> More information: <https://btrfs.readthedocs.io/en/latest/btrfs-filesystem.html>.\n\n- Show filesystem usage (optionally run as root to show detailed information):\n\n`btrfs filesystem usage {{path/to/btrfs_mount}}`\n\n- Show usage by individual devices:\n\n`sudo btrfs filesystem show {{path/to/btrfs_mount}}`\n\n- Defragment a single file on a btrfs filesystem (avoid while a deduplication agent is running):\n\n`sudo btrfs filesystem defragment -v {{path/to/file}}`\n\n- Defragment a directory recursively (does not cross subvolume boundaries):\n\n`sudo btrfs filesystem defragment -v -r {{path/to/directory}}`\n\n- Force syncing unwritten data blocks to disk(s):\n\n`sudo btrfs filesystem sync {{path/to/btrfs_mount}}`\n\n- Summarize disk usage for the files in a directory recursively:\n\n`sudo btrfs filesystem du --summarize {{path/to/directory}}`\n |
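The -h/--iec (base 1024) versus -H/--si (base 1000) size options above can be illustrated with a small shell sketch; the `to_iec`/`to_si` helper names are our own, not part of btrfs-progs:

```shell
#!/bin/sh
# Sketch: mimic btrfs's two size bases. These helpers are illustrative
# only; btrfs-progs does the conversion internally.
to_iec() {  # base 1024, like -h/--iec (GiB)
  awk -v b="$1" 'BEGIN { printf "%.2fGiB\n", b / (1024*1024*1024) }'
}
to_si() {   # base 1000, like -H/--si (GB)
  awk -v b="$1" 'BEGIN { printf "%.2fGB\n", b / (1000*1000*1000) }'
}

to_iec 1073741824   # 1.00GiB
to_si 1073741824    # 1.07GB
```

The same byte count (here exactly 1 GiB) renders as 1.00GiB in base 1024 but about 1.07GB in base 1000, which is why the same filesystem can report different-looking sizes depending on the flags used.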
btrfs-inspect-internal | btrfs-inspect-internal(8) - Linux manual page btrfs-inspect-internal(8) Linux manual page NAME | SYNOPSIS | DESCRIPTION | SUBCOMMAND | EXIT STATUS | AVAILABILITY | SEE ALSO | COLOPHON BTRFS-INSPECT-INTE(8) Btrfs Manual BTRFS-INSPECT-INTE(8) NAME top btrfs-inspect-internal - query various internal information SYNOPSIS top btrfs inspect-internal <subcommand> <args> DESCRIPTION top This command group provides an interface to query internal information. The functionality ranges from a simple UI to an ioctl or a more complex query that assembles the result from several internal structures. The latter usually requires calls to privileged ioctls. SUBCOMMAND top dump-super [options] <device> [device...] (replaces the standalone tool btrfs-show-super) Show btrfs superblock information stored on the given devices in textual form. By default the first superblock is printed; more details about all copies or additional backup data can be printed. Besides verification of the filesystem signature, there are no other sanity checks. The superblock checksum status is reported, and the device item and filesystem UUIDs are checked and reported. Note the meaning of option -s has changed in version 4.8 to be consistent with other tools: it specifies the superblock copy rather than the offset. The old way still works, but prints a warning. Please update your scripts to use --bytenr instead. The option -i has been deprecated. Options -f|--full print full superblock information, including the system chunk array and backup roots -a|--all print information about all present superblock copies (cannot be used together with the -s option) -i <super> (deprecated since 4.8, same behaviour as --super) --bytenr <bytenr> specify the offset to a superblock in a non-standard location at bytenr, useful for debugging (disables the -f option) If multiple options are specified, only the last one applies. 
-F|--force attempt to print the superblock even if a valid BTRFS signature is not found; the result may be completely wrong if the data does not resemble a superblock -s|--super <bytenr> (see compatibility note above) specify which mirror to print, valid values are 0, 1 and 2 and the superblock must be present on the device with a valid signature, can be used together with --force dump-tree [options] <device> [device...] (replaces the standalone tool btrfs-debug-tree) Dump tree structures from a given device in textual form, expanding keys to human-readable equivalents where possible. This is useful for analyzing filesystem state or inconsistencies and has a positive educational effect on understanding the internal filesystem structure. Note the dump contains file names; consider that if you're asked to send the dump for analysis. It does not contain file data. Options -e|--extents print only extent-related information: extent and device trees -d|--device print only device-related information: tree root, chunk and device trees -r|--roots print only short root node information, i.e. the root tree keys -R|--backups same as --roots plus print backup root info, i.e. 
the backup root keys and the respective tree root block offset -u|--uuid print only the uuid tree information, empty output if the tree does not exist -b <block_num> print info of the specified block only, can be specified multiple times --follow use with -b, print all children tree blocks of <block_num> --dfs (default up to 5.2) use depth-first search to print trees, the nodes and leaves are intermixed in the output --bfs (default since 5.3) use breadth-first search to print trees, the nodes are printed before all leaves --hide-names print a placeholder HIDDEN instead of various names, useful for developers to inspect the dump while keeping potentially sensitive information hidden This covers: directory entries (files, directories, subvolumes) default subvolume extended attributes (name, value) hardlink names (if stored inside another item or as extended references in standalone items) Note lengths are not hidden because they can be calculated from the item size anyway. --csum-headers print b-tree node checksums stored in headers (metadata) --csum-items print checksums stored in checksum items (data) --noscan do not automatically scan the system for other devices from the same filesystem, only use the devices provided as the arguments -t <tree_id> print only the tree with the specified ID, where the ID can be numerical or a common name in a flexible human-readable form The tree id name recognition rules: case does not matter the C source definition, e.g. BTRFS_ROOT_TREE_OBJECTID short forms without the BTRFS_ prefix, without the _TREE and _OBJECTID suffix, e.g. ROOT_TREE, ROOT convenience aliases, e.g. DEVICE for the DEV tree, CHECKSUM for CSUM an unrecognized ID is an error inode-resolve [-v] <ino> <path> (needs root privileges) resolve paths to all files with the given inode number ino in a given subvolume at path, i.e. 
all hardlinks Options -v (deprecated) alias for global -v option logical-resolve [-Pvo] [-s <bufsize>] <logical> <path> (needs root privileges) resolve paths to all files at the given logical address in the linear filesystem space Options -P skip the path resolving and print the inodes instead -o ignore offsets, find all references to an extent instead of a single block. Requires kernel support for the V2 ioctl (added in 4.15). The results might need further processing to filter out unwanted extents by the offset that is supposed to be obtained by other means. -s <bufsize> set the internal buffer for storing the file names to bufsize, default is 64k, maximum 16m. Buffer sizes over 64K require kernel support for the V2 ioctl (added in 4.15). -v (deprecated) alias for global -v option min-dev-size [options] <path> (needs root privileges) return the minimum size the device can be shrunk to, without performing any resize operation; this may be useful before executing the actual resize operation Options --id <id> specify the device id to query, default is 1 if this option is not used rootid <path> for a given file or directory, return the containing tree root id, but for a subvolume itself return its own tree id (i.e. subvol id) Note The result is undefined for the so-called empty subvolumes (identified by inode number 2), but such a subvolume does not contain any files anyway subvolid-resolve <subvolid> <path> (needs root privileges) resolve the absolute path of the subvolume id subvolid tree-stats [options] <device> (needs root privileges) Print sizes and statistics of trees. Options -b Print raw numbers in bytes. EXIT STATUS top btrfs inspect-internal returns a zero exit status if it succeeds. Non-zero is returned in case of failure. AVAILABILITY top btrfs is part of btrfs-progs. Please refer to the btrfs wiki http://btrfs.wiki.kernel.org for further details. SEE ALSO top mkfs.btrfs(8) 
| # btrfs inspect-internal\n\n> Query internal information of a btrfs filesystem.\n> More information: <https://btrfs.readthedocs.io/en/latest/btrfs-inspect-internal.html>.\n\n- Print the superblock's information:\n\n`sudo btrfs inspect-internal dump-super {{path/to/partition}}`\n\n- Print the superblock's information along with all of its copies:\n\n`sudo btrfs inspect-internal dump-super --all {{path/to/partition}}`\n\n- Print the filesystem's metadata information:\n\n`sudo btrfs inspect-internal dump-tree {{path/to/partition}}`\n\n- Print the list of files with inode number `n`:\n\n`sudo btrfs inspect-internal inode-resolve {{n}} {{path/to/btrfs_mount}}`\n\n- Print the list of files at a given logical address:\n\n`sudo btrfs inspect-internal logical-resolve {{logical_address}} {{path/to/btrfs_mount}}`\n\n- Print stats of the root, extent, csum and fs trees:\n\n`sudo btrfs inspect-internal tree-stats {{path/to/partition}}`\n |
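`inode-resolve` relies on btrfs-specific ioctls and root privileges. The same inode-to-paths idea can be approximated on any filesystem with `find -inum`, as in this sketch; unlike the ioctl, it walks the whole tree, so it is slow on large filesystems:

```shell
#!/bin/sh
# Sketch: approximate "resolve inode number -> all paths (hardlinks)"
# portably with find(1). Illustrative only, not how btrfs does it.
resolve_inode() {
  # $1 = inode number, $2 = directory to search under
  find "$2" -xdev -inum "$1" 2>/dev/null
}

# demo on a throwaway file (GNU stat first, BSD stat as fallback)
dir=$(mktemp -d)
touch "$dir/example"
ino=$(stat -c %i "$dir/example" 2>/dev/null || stat -f %i "$dir/example")
resolve_inode "$ino" "$dir"    # prints the path(s) to that inode
rm -r "$dir"
```

Because hardlinks share an inode, this prints every name pointing at it, which is exactly what `inode-resolve` reports for a btrfs subvolume.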
btrfs-property | btrfs-property(8) - Linux manual page btrfs-property(8) Linux manual page NAME | SYNOPSIS | DESCRIPTION | SUBCOMMAND | EXIT STATUS | AVAILABILITY | SEE ALSO | COLOPHON BTRFS-PROPERTY(8) Btrfs Manual BTRFS-PROPERTY(8) NAME top btrfs-property - get/set/list properties for given filesystem object SYNOPSIS top btrfs property <subcommand> <args> DESCRIPTION top btrfs property is used to get/set/list properties for a given filesystem object. The object can be an inode (file or directory), a subvolume or the whole filesystem. See the description of the get subcommand for more information about both btrfs objects and properties. btrfs property provides a unified and user-friendly method to tune different btrfs properties instead of using traditional methods like chattr(1) or lsattr(1). SUBCOMMAND top get [-t <type>] <object> [<name>] get a property from a btrfs <object> of the given <type> A btrfs object, which is set by <object>, can be the btrfs filesystem itself, a btrfs subvolume, an inode (file or directory) inside btrfs, or a device on which a btrfs exists. The option -t can be used to explicitly specify what type of object you meant. This is only needed when a property could be set for more than one object type. Possible types are s[ubvol], f[ilesystem], i[node] and d[evice], where the first letter is a shortcut. Specify the property by name. If no name is specified, all properties for the given object are printed. name is one of the following: ro read-only flag of subvolume: true or false. Please also see section SUBVOLUME FLAGS in btrfs-subvolume(8) for possible implications regarding incremental send. label label of the filesystem. For an unmounted filesystem, provide a path to a block device as object. For a mounted filesystem, specify a mount point. compression compression algorithm set for an inode, possible values: lzo, zlib, zstd. To disable compression use "" (empty string), no or none. 
list [-t <type>] <object> Lists available properties with their descriptions for the given object. See the description of the get subcommand for the meaning of each option. set [-f] [-t <type>] <object> <name> <value> Sets a property on a btrfs object. See the description of the get subcommand for the meaning of each option. Options -f Force the change. Changing some properties may involve safety checks or additional changes that depend on the property's semantics. EXIT STATUS top btrfs property returns a zero exit status if it succeeds. Non-zero is returned in case of failure. AVAILABILITY top btrfs is part of btrfs-progs. Please refer to the btrfs wiki http://btrfs.wiki.kernel.org for further details. SEE ALSO top mkfs.btrfs(8), lsattr(1), chattr(1) 
| # btrfs property\n\n> Get, set, or list properties for a BTRFS filesystem object (files, directories, subvolumes, filesystems, or devices).\n> More information: <https://btrfs.readthedocs.io/en/latest/btrfs-property.html>.\n\n- List available properties (and descriptions) for the given btrfs object:\n\n`sudo btrfs property list {{path/to/btrfs_object}}`\n\n- Get all properties for the given btrfs object:\n\n`sudo btrfs property get {{path/to/btrfs_object}}`\n\n- Get the `label` property for the given btrfs filesystem or device:\n\n`sudo btrfs property get {{path/to/btrfs_filesystem}} label`\n\n- Get all object type-specific properties for the given btrfs filesystem or device:\n\n`sudo btrfs property get -t {{subvol|filesystem|inode|device}} {{path/to/btrfs_filesystem}}`\n\n- Set the `compression` property for a given btrfs inode (either a file or directory):\n\n`sudo btrfs property set {{path/to/btrfs_inode}} compression {{zstd|zlib|lzo|none}}`\n |
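The property names accepted by `get`/`set` form a small fixed set (`ro`, `label`, `compression`). A defensive wrapper can reject typos before touching the filesystem; the `prop_set` name and the hard-coded list are our own sketch, not part of btrfs-progs:

```shell
#!/bin/sh
# Sketch: validate the property name locally before invoking
# `btrfs property set`. btrfs itself rejects unknown names too;
# this just fails earlier and without needing root.
prop_set() {
  obj=$1 name=$2 value=$3
  case $name in
    ro|label|compression) ;;                        # names from the man page
    *) echo "unknown property: $name" >&2; return 1 ;;
  esac
  sudo btrfs property set "$obj" "$name" "$value"
}

# example (needs root and a btrfs object):
#   prop_set /mnt/subvol ro true
```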
btrfs-rescue | btrfs-rescue(8) - Linux manual page btrfs-rescue(8) Linux manual page NAME | SYNOPSIS | DESCRIPTION | SUBCOMMAND | EXIT STATUS | AVAILABILITY | SEE ALSO | COLOPHON BTRFS-RESCUE(8) Btrfs Manual BTRFS-RESCUE(8) NAME top btrfs-rescue - Recover a damaged btrfs filesystem SYNOPSIS top btrfs rescue <subcommand> <args> DESCRIPTION top btrfs rescue is used to try to recover a damaged btrfs filesystem. SUBCOMMAND top chunk-recover [options] <device> Recover the chunk tree by scanning the devices Options -y assume an answer of yes to all questions. -h help. -v (deprecated) alias for global -v option Note Since chunk-recover will scan the whole device, it will be VERY slow, especially when executed on a large device. fix-device-size <device> fix device size and super block total bytes values that do not match Kernel 4.11 started to check the device size more strictly, and this might not match the stored value of total bytes. See the exact error message below. Newer kernels will refuse to mount the filesystem where the values do not match. This error is not fatal and can be fixed. This command will fix the device size values if possible. BTRFS error (device sdb): super_total_bytes 92017859088384 mismatch with fs_devices total_rw_bytes 92017859094528 The mismatch may also exhibit as a kernel warning: WARNING: CPU: 3 PID: 439 at fs/btrfs/ctree.h:1559 btrfs_update_device+0x1c5/0x1d0 [btrfs] clear-uuid-tree <device> Clear the uuid tree, so that the kernel can re-generate it at the next read-write mount. Since kernel v4.16 there are more sanity checks performed, and sometimes non-critical trees like the uuid tree can cause problems and reject the mount. In such a case, clearing the uuid tree may make the filesystem mountable again without much risk, as it's built from other trees. super-recover [options] <device> Recover bad superblocks from good copies. Options -y assume an answer of yes to all questions. 
-v (deprecated) alias for global -v option zero-log <device> clear the filesystem log tree This command will clear the filesystem log tree. This may fix a specific set of problems when the filesystem mount fails due to the log replay. See below for sample stacktraces that may show up in the system log. The common case where this happens was fixed a long time ago, so it is unlikely that you will see this particular problem, but the command is kept around. Note clearing the log may lead to loss of changes that were made since the last transaction commit. This may be up to 30 seconds (the default commit period) or less if the commit was implied by other filesystem activity. One can determine whether zero-log is needed according to the kernel backtrace: ? replay_one_dir_item+0xb5/0xb5 [btrfs] ? walk_log_tree+0x9c/0x19d [btrfs] ? btrfs_read_fs_root_no_radix+0x169/0x1a1 [btrfs] ? btrfs_recover_log_trees+0x195/0x29c [btrfs] ? replay_one_dir_item+0xb5/0xb5 [btrfs] ? btree_read_extent_buffer_pages+0x76/0xbc [btrfs] ? open_ctree+0xff6/0x132c [btrfs] If the errors are like the above, then zero-log should be used to clear the log and the filesystem may be mounted normally again. The keywords to look for are open_ctree, which says that it's during mount, and function names that contain replay, recover or log_tree. EXIT STATUS top btrfs rescue returns a zero exit status if it succeeds. Non-zero is returned in case of failure. AVAILABILITY top btrfs is part of btrfs-progs. Please refer to the btrfs wiki http://btrfs.wiki.kernel.org for further details. SEE ALSO top mkfs.btrfs(8), btrfs-scrub(8), btrfs-check(8) 
| # btrfs rescue\n\n> Try to recover a damaged btrfs filesystem.\n> More information: <https://btrfs.readthedocs.io/en/latest/btrfs-rescue.html>.\n\n- Rebuild the filesystem metadata tree (very slow):\n\n`sudo btrfs rescue chunk-recover {{path/to/partition}}`\n\n- Fix device size alignment related problems (e.g. unable to mount the filesystem with super total bytes mismatch):\n\n`sudo btrfs rescue fix-device-size {{path/to/partition}}`\n\n- Recover a corrupted superblock from correct copies (recover the root of the filesystem tree):\n\n`sudo btrfs rescue super-recover {{path/to/partition}}`\n\n- Recover from interrupted transactions (fixes log replay problems):\n\n`sudo btrfs rescue zero-log {{path/to/partition}}`\n\n- Create a `/dev/btrfs-control` control device when `mknod` is not installed:\n\n`sudo btrfs rescue create-control-device`\n |
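The rescue subcommands above differ widely in cost and risk: super-recover and zero-log are quick, while chunk-recover scans the whole device and is very slow. A dry-run planner that merely prints a least-invasive-first sequence can make that ordering explicit; the `rescue_plan` name and the ordering heuristic are our own sketch, derived from the man page's cost notes:

```shell
#!/bin/sh
# Sketch: print a least- to most-invasive btrfs rescue sequence for a
# device. This only echoes the commands; nothing is executed.
rescue_plan() {
  dev=$1
  printf 'btrfs rescue super-recover %s\n' "$dev"    # bad superblocks
  printf 'btrfs rescue zero-log %s\n' "$dev"         # log replay failures
  printf 'btrfs rescue chunk-recover %s\n' "$dev"    # last resort, very slow
}

rescue_plan /dev/sdb1
```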
btrfs-restore | btrfs-restore(8) - Linux manual page btrfs-restore(8) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | EXIT STATUS | AVAILABILITY | SEE ALSO | COLOPHON BTRFS-RESTORE(8) Btrfs Manual BTRFS-RESTORE(8) NAME top btrfs-restore - try to restore files from a damaged btrfs filesystem image SYNOPSIS top btrfs restore [options] <device> <path> | -l <device> DESCRIPTION top btrfs restore is used to try to salvage files from a damaged filesystem and restore them into <path>, or just list the subvolume tree roots. The filesystem image is not modified. If the filesystem is damaged and cannot be repaired by the other tools (btrfs-check(8) or btrfs-rescue(8)), btrfs restore could be used to retrieve file data, as far as the metadata are readable. The checks done by restore are less strict and the process is usually able to get far enough to retrieve data from the whole filesystem. This comes at the cost that some data might be incomplete or from older versions if they're available. There are several options to attempt restoration of various file metadata types. You can try a dry run first to see how well the process goes and use further options to extend the set of restored metadata. For images with damaged tree structures, there are several options to point the process to some spare copy. Note It is recommended to read the following btrfs wiki page if your data is not salvaged with the default options: https://btrfs.wiki.kernel.org/index.php/Restore OPTIONS top -s|--snapshots also get snapshots that are skipped by default -x|--xattr get extended attributes -m|--metadata restore owner, mode and times for files and directories -S|--symlinks restore symbolic links as well as normal files -i|--ignore-errors ignore errors during restoration and continue -o|--overwrite overwrite directories/files in <path>, e.g. 
for repeated runs -t <bytenr> use <bytenr> to read the root tree -f <bytenr> only restore files that are under the specified subvolume root pointed to by <bytenr> -u|--super <mirror> use the given superblock mirror identified by <mirror>, it can be 0, 1 or 2 -r|--root <rootid> only restore files that are under a specified subvolume whose objectid is <rootid> -d find directory -l|--list-roots list subvolume tree roots, can be used as an argument for -r -D|--dry-run dry run (only list files that would be recovered) --path-regex <regex> restore only filenames matching a regular expression (regex(7)) with a mandatory format ^/(|home(|/username(|/Desktop(|/.*))))$ The format is not very comfortable and restores all files in the directories in the whole path, so this is not useful for restoring a single file in a deep hierarchy. -c ignore case (--path-regex only) -v|--verbose (deprecated) alias for global -v option Global options -v|--verbose be verbose and print what is being restored EXIT STATUS top btrfs restore returns a zero exit status if it succeeds. Non-zero is returned in case of failure. AVAILABILITY top btrfs is part of btrfs-progs. Please refer to the btrfs wiki http://btrfs.wiki.kernel.org for further details. SEE ALSO top mkfs.btrfs(8), btrfs-rescue(8), btrfs-check(8) 
| # btrfs restore\n\n> Try to salvage files from a damaged btrfs filesystem.\n> More information: <https://btrfs.readthedocs.io/en/latest/btrfs-restore.html>.\n\n- Restore all files from a btrfs filesystem to a given directory:\n\n`sudo btrfs restore {{path/to/btrfs_device}} {{path/to/target_directory}}`\n\n- List (don't write) files to be restored from a btrfs filesystem:\n\n`sudo btrfs restore --dry-run {{path/to/btrfs_device}} {{path/to/target_directory}}`\n\n- Restore files matching a given regex ([c]ase-insensitively) from a btrfs filesystem (all parent directories of the target file(s) must match as well):\n\n`sudo btrfs restore --path-regex {{regex}} -c {{path/to/btrfs_device}} {{path/to/target_directory}}`\n\n- Restore files from a btrfs filesystem using a specific root tree `bytenr` (see `btrfs-find-root`):\n\n`sudo btrfs restore -t {{bytenr}} {{path/to/btrfs_device}} {{path/to/target_directory}}`\n\n- Restore files from a btrfs filesystem (along with metadata, extended attributes, and symlinks), overwriting files in the target:\n\n`sudo btrfs restore --metadata --xattr --symlinks --overwrite {{path/to/btrfs_device}} {{path/to/target_directory}}`\n |
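The `--path-regex` format documented in the OPTIONS section (`^/(|home(|/username(|/Desktop(|/.*))))$`) is tedious to write by hand. A small helper can generate it from an ordinary path; the `build_path_regex` name is our own sketch, not part of btrfs-progs:

```shell
#!/bin/sh
# Sketch: generate the nested regex that `btrfs restore --path-regex`
# expects from a plain path, e.g.
#   home/username/Desktop -> ^/(|home(|/username(|/Desktop(|/.*))))$
build_path_regex() {
  path=${1#/}                  # tolerate a leading slash
  regex='' close='' first=1
  set -f; IFS=/                # split on "/", no glob expansion
  for part in $path; do
    if [ "$first" = 1 ]; then
      regex="(|$part"; first=0
    else
      regex="$regex(|/$part"
    fi
    close="$close)"            # one closing paren per component
  done
  set +f; unset IFS
  printf '^/%s(|/.*)%s$\n' "$regex" "$close"
}

build_path_regex home/username/Desktop
```

Each path component opens one `(|...` alternative, so the regex also matches every ancestor directory, which is exactly what restore needs to walk down to the target.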
btrfs-scrub | btrfs-scrub(8) - Linux manual page btrfs-scrub(8) Linux manual page NAME | SYNOPSIS | DESCRIPTION | SUBCOMMAND | EXIT STATUS | AVAILABILITY | SEE ALSO | COLOPHON BTRFS-SCRUB(8) Btrfs Manual BTRFS-SCRUB(8) NAME top btrfs-scrub - scrub btrfs filesystem, verify block checksums SYNOPSIS top btrfs scrub <subcommand> <args> DESCRIPTION top btrfs scrub is used to scrub a mounted btrfs filesystem, which will read all data and metadata blocks from all devices and verify checksums. It automatically repairs corrupted blocks if there's a correct copy available. Note Scrub is not a filesystem checker (fsck) and does not verify or repair structural damage in the filesystem. It really only checks checksums of data and tree blocks; it doesn't ensure the content of tree blocks is valid and consistent. There's some validation performed when metadata blocks are read from disk, but it's not extensive and cannot substitute for a full btrfs check run. The user is supposed to run it manually or via a periodic system service. The recommended period is a month but could be less. The estimated device bandwidth utilization is about 80% on an idle filesystem. The IO priority class is by default idle, so background scrub should not significantly interfere with normal filesystem operation. The IO scheduler set for the device(s) might not support the priority classes though. The scrubbing status is recorded in /var/lib/btrfs/ in textual files named scrub.status.UUID for a filesystem identified by the given UUID. (Progress state is communicated through a named pipe in file scrub.progress.UUID in the same directory.) The status file is updated every 5 seconds. A resumed scrub will continue from the last saved position. Scrub can be started only on a mounted filesystem, though it's possible to scrub only a selected device. See scrub start for more. 
SUBCOMMAND top cancel <path>|<device> If a scrub is running on the filesystem identified by path or device, cancel it. If a device is specified, the corresponding filesystem is found and btrfs scrub cancel behaves as if it was called on that filesystem. The progress is saved in the status file so btrfs scrub resume can continue from the last position. resume [-BdqrR] [-c <ioprio_class> -n <ioprio_classdata>] <path>|<device> Resume a cancelled or interrupted scrub on the filesystem identified by path or on a given device. The starting point is read from the status file if it exists. This does not start a new scrub if the last scrub finished successfully. Options see scrub start. start [-BdqrRf] [-c <ioprio_class> -n <ioprio_classdata>] <path>|<device> Start a scrub on all devices of the mounted filesystem identified by path or on a single device. If a scrub is already running, the new one will not start. A device of an unmounted filesystem cannot be scrubbed this way. Without options, scrub is started as a background process. The automatic repair of damaged copies is performed by default for block group profiles with redundancy. The default IO priority of scrub is the idle class. The priority can be configured similarly to the ionice(1) syntax using the -c and -n options. Note that not all IO schedulers honor the ionice settings. 
Options -B do not background and print scrub statistics when finished -d print separate statistics for each device of the filesystem (-B only) at the end -r run in read-only mode, do not attempt to correct anything, can be run on a read-only filesystem -R raw print mode, print full data instead of summary -c <ioprio_class> set IO priority class (see ionice(1) manpage) -n <ioprio_classdata> set IO priority classdata (see ionice(1) manpage) -f force starting a new scrub even if a scrub is already running; this can be useful when the scrub status file is damaged and reports a running scrub although it is not, but should not normally be necessary -q (deprecated) alias for global -q option status [options] <path>|<device> Show status of a running scrub for the filesystem identified by path or for the specified device. If no scrub is running, show statistics of the last finished or cancelled scrub for that filesystem or device. Options -d print separate statistics for each device of the filesystem -R print all raw statistics without postprocessing as returned by the status ioctl --raw print all numbers as raw values in bytes, without the B suffix --human-readable print human-friendly numbers, base 1024, this is the default --iec select the 1024 base for the following options, according to the IEC standard --si select the 1000 base for the following options, according to the SI standard --kbytes show sizes in KiB, or kB with --si --mbytes show sizes in MiB, or MB with --si --gbytes show sizes in GiB, or GB with --si --tbytes show sizes in TiB, or TB with --si EXIT STATUS top btrfs scrub returns a zero exit status if it succeeds. Non-zero is returned in case of failure: 1 scrub couldn't be performed 2 there is nothing to resume 3 scrub found uncorrectable errors AVAILABILITY top btrfs is part of btrfs-progs. Please refer to the btrfs wiki http://btrfs.wiki.kernel.org for further details. 
SEE ALSO top mkfs.btrfs(8), ionice(1) 
| # btrfs scrub\n\n> Scrub btrfs filesystems to verify data integrity.\n> It is recommended to run a scrub once a month.\n> More information: <https://btrfs.readthedocs.io/en/latest/btrfs-scrub.html>.\n\n- Start a scrub:\n\n`sudo btrfs scrub start {{path/to/btrfs_mount}}`\n\n- Show the status of an ongoing or last completed scrub:\n\n`sudo btrfs scrub status {{path/to/btrfs_mount}}`\n\n- Cancel an ongoing scrub:\n\n`sudo btrfs scrub cancel {{path/to/btrfs_mount}}`\n\n- Resume a previously cancelled scrub:\n\n`sudo btrfs scrub resume {{path/to/btrfs_mount}}`\n\n- Start a scrub, but wait until the scrub finishes before exiting:\n\n`sudo btrfs scrub start -B {{path/to/btrfs_mount}}`\n\n- Start a scrub in quiet mode (does not print errors or statistics):\n\n`sudo btrfs scrub start -q {{path/to/btrfs_mount}}`\n |
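The monthly-scrub recommendation in the summary above can be automated with cron; a sketch of a crontab fragment (the mount point is hypothetical, install with `sudo crontab -e`):

```shell
# Run a foreground scrub of /mnt/data at 03:00 on the 1st of each month.
# -B keeps btrfs in the foreground so cron can mail the final statistics.
0 3 1 * * /usr/bin/btrfs scrub start -B /mnt/data
```

This is a config fragment, not a runnable script; adjust the schedule and path to taste.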
btrfs-subvolume | btrfs-subvolume(8) - Linux manual page btrfs-subvolume(8) Linux manual page NAME | SYNOPSIS | DESCRIPTION | SUBVOLUME AND SNAPSHOT | SUBCOMMAND | SUBVOLUME FLAGS | EXAMPLES | EXIT STATUS | AVAILABILITY | SEE ALSO | COLOPHON BTRFS-SUBVOLUME(8) Btrfs Manual BTRFS-SUBVOLUME(8) NAME top btrfs-subvolume - manage btrfs subvolumes SYNOPSIS top btrfs subvolume <subcommand> [<args>] DESCRIPTION top btrfs subvolume is used to create/delete/list/show btrfs subvolumes and snapshots. SUBVOLUME AND SNAPSHOT top A subvolume is a part of the filesystem with its own independent file/directory hierarchy. Subvolumes can share file extents. A snapshot is also a subvolume, but with a given initial content of the original subvolume. Note A subvolume in btrfs is not like an LVM logical volume, which is a block-level snapshot, while btrfs subvolumes are file extent-based. A subvolume looks like a normal directory, with some additional operations described below. Subvolumes can be renamed or moved; nesting subvolumes is not restricted but has some implications regarding snapshotting. A subvolume in btrfs can be accessed in two ways: like any other directory that is accessible to the user, or like a separately mounted filesystem (options subvol or subvolid). In the latter case the parent directory is neither visible nor accessible. This is similar to a bind mount, and in fact the subvolume mount does exactly that. A freshly created filesystem is also a subvolume, called top-level, which internally has id 5. This subvolume cannot be removed or replaced by another subvolume. This is also the subvolume that will be mounted by default, unless the default subvolume has been changed (see subcommand set-default). A snapshot is a subvolume like any other, with a given initial content. By default, snapshots are created read-write. File modifications in a snapshot do not affect the files in the original subvolume. 
SUBCOMMAND top create [-i <qgroupid>] [<dest>/]<name> Create a subvolume <name> in <dest>. If <dest> is not given, subvolume <name> will be created in the current directory. Options -i <qgroupid> Add the newly created subvolume to a qgroup. This option can be given multiple times. delete [options] <[<subvolume> [<subvolume>...]], delete -i|--subvolid <subvolid> <path>> Delete the subvolume(s) from the filesystem. If <subvolume> is not a subvolume, btrfs returns an error but continues if there are more arguments to process. If --subvolid is used, <path> must point to a btrfs filesystem. See btrfs subvolume list or btrfs inspect-internal rootid for how to get the subvolume id. The corresponding directory is removed instantly but the data blocks are removed later in the background. The command returns immediately. See btrfs subvolume sync for how to wait until the subvolume gets completely removed. The deletion does not involve a full transaction commit by default for performance reasons. As a consequence, the subvolume may appear again after a crash. Use one of the --commit options to wait until the operation is safely stored on the device. The default subvolume (see btrfs subvolume set-default) cannot be deleted and returns an error (EPERM), and this is logged to the system log. A subvolume that's currently involved in send (see btrfs send) also cannot be deleted until the send is finished. This is also logged in the system log. Options -c|--commit-after wait for transaction commit at the end of the operation. -C|--commit-each wait for transaction commit after deleting each subvolume. -i|--subvolid <subvolid> subvolume id to be removed instead of the <path> that should point to the filesystem with the subvolume -v|--verbose (deprecated) alias for global -v option find-new <subvolume> <last_gen> List the recently modified files in a subvolume, after <last_gen> generation. get-default <path> Get the default subvolume of the filesystem <path>. 
The output format is similar to that of the subvolume list command. list [options] [-G [+|-]<value>] [-C [+|-]<value>] [--sort=rootid,gen,ogen,path] <path> List the subvolumes present in the filesystem <path>. For every subvolume the following information is shown by default: ID <ID> gen <generation> top level <ID> path <path> where ID is the subvolume's id, gen is an internal counter which is updated every transaction, top level is the same as the parent subvolume's id, and path is the relative path of the subvolume to the top level subvolume. The subvolume's ID may be used by the subvolume set-default command, or at mount time via the subvolid= option. Options Path filtering -o print only subvolumes below the specified <path>. -a print all the subvolumes in the filesystem and distinguish between absolute and relative path with respect to the given <path>. Field selection -p print the parent ID (parent here means the subvolume which contains this subvolume). -c print the ogeneration of the subvolume, aliases: ogen or origin generation. -g print the generation of the subvolume (default). -u print the UUID of the subvolume. -q print the parent UUID of the subvolume (parent here means the subvolume of which this subvolume is a snapshot). -R print the UUID of the sent subvolume, where the subvolume is the result of a receive operation. Type filtering -s only snapshot subvolumes in the filesystem will be listed. -r only readonly subvolumes in the filesystem will be listed. -d list deleted subvolumes that are not yet cleaned. Other -t print the result as a table. Sorting By default the subvolumes will be sorted by subvolume ID ascending. -G [+|-]<value> list subvolumes whose generation is >=, <= or = value. '+' means >= value, '-' means <= value; if there is neither '+' nor '-', it means = value. -C [+|-]<value> list subvolumes whose ogeneration is >=, <= or = value. The usage is the same as for the -G option. 
--sort=rootid,gen,ogen,path list subvolumes ordered by the specified items. You can add '+' or '-' in front of each item: '+' means ascending, '-' means descending; the default is ascending. Items can be combined with ',', e.g. --sort=+ogen,-gen,path,rootid. set-default [<subvolume>|<id> <path>] Set the default subvolume for the (mounted) filesystem at <path>. This will hide the top-level subvolume (i.e. the one mounted with subvol=/ or subvolid=5). Takes action on next mount. There are two ways to specify the subvolume: by <id> or by the <subvolume> path. The id can be obtained from btrfs subvolume list, btrfs subvolume show or btrfs inspect-internal rootid. show [options] <path> Show more information about a subvolume (UUIDs, generations, times, flags, related snapshots). /mnt/btrfs/subvolume Name: subvolume UUID: 5e076a14-4e42-254d-ac8e-55bebea982d1 Parent UUID: - Received UUID: - Creation time: 2018-01-01 12:34:56 +0000 Subvolume ID: 79 Generation: 2844 Gen at creation: 2844 Parent ID: 5 Top level ID: 5 Flags: - Snapshot(s): Options -r|--rootid <ID> show details about the subvolume with root <ID>, looked up in <path> -u|--uuid UUID show details about the subvolume with the given <UUID>, looked up in <path> snapshot [-r] [-i <qgroupid>] <source> <dest>|[<dest>/]<name> Create a snapshot of the subvolume <source> with the name <name> in the <dest> directory. If only <dest> is given, the subvolume will be named the basename of <source>. If <source> is not a subvolume, btrfs returns an error. Options -r Make the new snapshot read only. -i <qgroupid> Add the newly created subvolume to a qgroup. This option can be given multiple times. sync <path> [subvolid...] Wait until the given subvolume(s) are completely removed from the filesystem after deletion. If no subvolume id is given, wait until all current deletion requests are completed, but do not wait for subvolumes deleted in the meantime. 
Options -s <N> sleep N seconds between checks (default: 1) SUBVOLUME FLAGS top The subvolume flag currently implemented is the ro property. Read-write subvolumes have it set to false, snapshots to true. In addition to that, a plain snapshot will also have its last change generation and creation generation equal. Read-only snapshots are building blocks of incremental send (see btrfs-send(8)) and the whole use case relies on unmodified snapshots from which the relative changes are generated. Thus, changing the subvolume flags from read-only to read-write will break the assumptions and may lead to unexpected changes in the resulting incremental stream. A snapshot that was created by send/receive will be read-only, with a different last change generation and with received_uuid set, which identifies the subvolume on the filesystem that produced the stream. The use case relies on matching data on both sides. Changing the subvolume to read-write after it has been received requires resetting the received_uuid. As this is a notable change and could potentially break the incremental send use case, performing it via btrfs property set requires force if that is really desired by the user. Note The safety checks were implemented in 5.14.2; any subvolumes previously received (with a valid received_uuid) and in read-write status may exist and could still lead to problems with send/receive. You can use btrfs subvolume show to identify them. Flipping the flags to read-only and back to read-write will reset the received_uuid manually. There may exist a convenience tool in the future. EXAMPLES top Example 1. Deleting a subvolume If we want to delete a subvolume called foo from a btrfs volume mounted at /mnt/bar we could run the following: btrfs subvolume delete /mnt/bar/foo EXIT STATUS top btrfs subvolume returns a zero exit status if it succeeds. A non-zero value is returned in case of failure. AVAILABILITY top btrfs is part of btrfs-progs. 
Please refer to the btrfs wiki http://btrfs.wiki.kernel.org for further details. SEE ALSO top mkfs.btrfs(8), mount(8), btrfs-quota(8), btrfs-qgroup(8), btrfs-send(8) Btrfs v5.16.1 02/06/2022 BTRFS-SUBVOLUME(8) 
| # btrfs subvolume\n\n> Manage btrfs subvolumes and snapshots.\n> More information: <https://btrfs.readthedocs.io/en/latest/btrfs-subvolume.html>.\n\n- Create a new empty subvolume:\n\n`sudo btrfs subvolume create {{path/to/new_subvolume}}`\n\n- List all subvolumes and snapshots in the specified filesystem:\n\n`sudo btrfs subvolume list {{path/to/btrfs_filesystem}}`\n\n- Delete a subvolume:\n\n`sudo btrfs subvolume delete {{path/to/subvolume}}`\n\n- Create a read-only snapshot of an existing subvolume:\n\n`sudo btrfs subvolume snapshot -r {{path/to/source_subvolume}} {{path/to/target}}`\n\n- Create a read-write snapshot of an existing subvolume:\n\n`sudo btrfs subvolume snapshot {{path/to/source_subvolume}} {{path/to/target}}`\n\n- Show detailed information about a subvolume:\n\n`sudo btrfs subvolume show {{path/to/subvolume}}`\n |
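The `ID <ID> gen <generation> top level <ID> path <path>` line format documented for `btrfs subvolume list` above lends itself to simple post-processing; a sketch using hardcoded sample output (a real run would pipe `sudo btrfs subvolume list /mnt` instead, and the sample IDs and paths here are invented):

```shell
#!/bin/sh
# Extract just the subvolume paths from `btrfs subvolume list`-style output.
# Sample lines follow the documented format; real use:
#   sudo btrfs subvolume list /mnt | awk '{print $NF}'
sample='ID 256 gen 2844 top level 5 path home
ID 257 gen 2901 top level 5 path snapshots/home-2024-01-01'

# $NF is the last whitespace-separated field; this assumes the subvolume
# path itself contains no spaces.
printf '%s\n' "$sample" | awk '{print $NF}'
```

On the sample input this prints `home` and `snapshots/home-2024-01-01`, one per line.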
busctl | busctl(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training busctl(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | COMMANDS | OPTIONS | PARAMETER FORMATTING | EXAMPLES | SEE ALSO | NOTES | COLOPHON BUSCTL(1) busctl BUSCTL(1) NAME top busctl - Introspect the bus SYNOPSIS top busctl [OPTIONS...] [COMMAND] [NAME...] DESCRIPTION top busctl may be used to introspect and monitor the D-Bus bus. COMMANDS top The following commands are understood: list Show all peers on the bus, by their service names. By default, shows both unique and well-known names, but this may be changed with the --unique and --acquired switches. This is the default operation if no command is specified. Added in version 209. status [SERVICE] Show process information and credentials of a bus service (if one is specified by its unique or well-known name), a process (if one is specified by its numeric PID), or the owner of the bus (if no parameter is specified). Added in version 209. monitor [SERVICE...] Dump messages being exchanged. If SERVICE is specified, show messages to or from this peer, identified by its well-known or unique name. Otherwise, show all messages on the bus. Use Ctrl+C to terminate the dump. Added in version 209. capture [SERVICE...] Similar to monitor but writes the output in pcapng format (for details, see PCAP Next Generation (pcapng) Capture File Format[1]). Make sure to redirect standard output to a file or pipe. Tools like wireshark(1) may be used to dissect and view the resulting files. Added in version 218. tree [SERVICE...] Shows an object tree of one or more services. If SERVICE is specified, show object tree of the specified services only. Otherwise, show all object trees of all services on the bus that acquired at least one well-known name. Added in version 218. introspect SERVICE OBJECT [INTERFACE] Show interfaces, methods, properties and signals of the specified object (identified by its path) on the specified service. 
If the interface argument is passed, the output is limited to members of the specified interface. Added in version 218. call SERVICE OBJECT INTERFACE METHOD [SIGNATURE [ARGUMENT...]] Invoke a method and show the response. Takes a service name, object path, interface name and method name. If parameters shall be passed to the method call, a signature string is required, followed by the arguments, individually formatted as strings. For details on the formatting used, see below. To suppress output of the returned data, use the --quiet option. Added in version 218. emit OBJECT INTERFACE SIGNAL [SIGNATURE [ARGUMENT...]] Emit a signal. Takes an object path, interface name and method name. If parameters shall be passed, a signature string is required, followed by the arguments, individually formatted as strings. For details on the formatting used, see below. To specify the destination of the signal, use the --destination= option. Added in version 242. get-property SERVICE OBJECT INTERFACE PROPERTY... Retrieve the current value of one or more object properties. Takes a service name, object path, interface name and property name. Multiple properties may be specified at once, in which case their values will be shown one after the other, separated by newlines. The output is, by default, in terse format. Use --verbose for a more elaborate output format. Added in version 218. set-property SERVICE OBJECT INTERFACE PROPERTY SIGNATURE ARGUMENT... Set the current value of an object property. Takes a service name, object path, interface name, property name, property signature, followed by a list of parameters formatted as strings. Added in version 218. help Show command syntax help. Added in version 209. OPTIONS top The following options are understood: --address=ADDRESS Connect to the bus specified by ADDRESS instead of using suitable defaults for either the system or user bus (see --system and --user options). Added in version 209. 
--show-machine When showing the list of peers, show a column containing the names of containers they belong to. See systemd-machined.service(8). Added in version 209. --unique When showing the list of peers, show only "unique" names (of the form ":number.number"). Added in version 209. --acquired The opposite of --unique: only "well-known" names will be shown. Added in version 209. --activatable When showing the list of peers, show only peers which have actually not been activated yet, but may be started automatically if accessed. Added in version 209. --match=MATCH When showing messages being exchanged, show only the subset matching MATCH. See sd_bus_add_match(3). Added in version 209. --size= When used with the capture command, specifies the maximum bus message size to capture ("snaplen"). Defaults to 4096 bytes. Added in version 218. --list When used with the tree command, shows a flat list of object paths instead of a tree. Added in version 218. -q, --quiet When used with the call command, suppresses display of the response message payload. Note that even if this option is specified, errors returned will still be printed and the tool will indicate success or failure with the process exit code. Added in version 218. --verbose When used with the call or get-property command, shows output in a more verbose format. Added in version 218. --xml-interface When used with the introspect call, dump the XML description received from the D-Bus org.freedesktop.DBus.Introspectable.Introspect call instead of the normal output. Added in version 243. --json=MODE When used with the call or get-property command, shows output formatted as JSON. Expects one of "short" (for the shortest possible output without any redundant whitespace or line breaks) or "pretty" (for a pretty version of the same, with indentation and line breaks). Note that transformation from D-Bus marshalling to JSON is done in a loss-less way, which means type information is embedded into the JSON object tree. 
Added in version 240. -j Equivalent to --json=pretty when invoked interactively from a terminal. Otherwise equivalent to --json=short, in particular when the output is piped to some other program. Added in version 240. --expect-reply=BOOL When used with the call command, specifies whether busctl shall wait for completion of the method call, output the returned method response data, and return success or failure via the process exit code. If this is set to "no", the method call will be issued but no response is expected, the tool terminates immediately, and thus no response can be shown, and no success or failure is returned via the exit code. To only suppress output of the reply message payload, use --quiet above. Defaults to "yes". Added in version 218. --auto-start=BOOL When used with the call or emit command, specifies whether the method call should implicitly activate the called service, should it not be running yet but is configured to be auto-started. Defaults to "yes". Added in version 218. --allow-interactive-authorization=BOOL When used with the call command, specifies whether the services may enforce interactive authorization while executing the operation, if the security policy is configured for this. Defaults to "yes". Added in version 218. --timeout=SECS When used with the call command, specifies the maximum time to wait for method call completion. If no time unit is specified, assumes seconds. The usual other units are understood, too (ms, us, s, min, h, d, w, month, y). Note that this timeout does not apply if --expect-reply=no is used, as the tool does not wait for any reply message then. When not specified or when set to 0, the default of "25s" is assumed. Added in version 218. --augment-creds=BOOL Controls whether credential data reported by list or status shall be augmented with data from /proc/. 
When this is turned on, the data shown is possibly inconsistent, as the data read from /proc/ might be more recent than the rest of the credential information. Defaults to "yes". Added in version 218. --watch-bind=BOOL Controls whether to wait for the specified AF_UNIX bus socket to appear in the file system before connecting to it. Defaults to off. When enabled, the tool will watch the file system until the socket is created and then connect to it. Added in version 237. --destination=SERVICE Takes a service name. When used with the emit command, a signal is emitted to the specified service. Added in version 242. --user Talk to the service manager of the calling user, rather than the service manager of the system. --system Talk to the service manager of the system. This is the implied default. -H, --host= Execute the operation remotely. Specify a hostname, or a username and hostname separated by "@", to connect to. The hostname may optionally be suffixed by a port ssh is listening on, separated by ":", and then a container name, separated by "/", which connects directly to a specific container on the specified host. This will use SSH to talk to the remote machine manager instance. Container names may be enumerated with machinectl -H HOST. Put IPv6 addresses in brackets. -M, --machine= Execute operation on a local container. Specify a container name to connect to, optionally prefixed by a user name to connect as and a separating "@" character. If the special string ".host" is used in place of the container name, a connection to the local system is made (which is useful to connect to a specific user's user bus: "--user --machine=lennart@.host"). If the "@" syntax is not used, the connection is made as root user. If the "@" syntax is used either the left hand side or the right hand side may be omitted (but not both) in which case the local user name and ".host" are implied. -l, --full Do not ellipsize the output in list command. Added in version 245. 
--no-pager Do not pipe output into a pager. --no-legend Do not print the legend, i.e. column headers and the footer with hints. -h, --help Print a short help text and exit. --version Print a short version string and exit. PARAMETER FORMATTING top The call and set-property commands take a signature string followed by a list of parameters formatted as string (for details on D-Bus signature strings, see the Type system chapter of the D-Bus specification[2]). For simple types, each parameter following the signature should simply be the parameter's value formatted as string. Positive boolean values may be formatted as "true", "yes", "on", or "1"; negative boolean values may be specified as "false", "no", "off", or "0". For arrays, a numeric argument for the number of entries followed by the entries shall be specified. For variants, the signature of the contents shall be specified, followed by the contents. For dictionaries and structs, the contents of them shall be directly specified. For example, s jawoll is the formatting of a single string "jawoll". as 3 hello world foobar is the formatting of a string array with three entries, "hello", "world" and "foobar". a{sv} 3 One s Eins Two u 2 Yes b true is the formatting of a dictionary array that maps strings to variants, consisting of three entries. The string "One" is assigned the string "Eins". The string "Two" is assigned the 32-bit unsigned integer 2. The string "Yes" is assigned a positive boolean. Note that the call, get-property, introspect commands will also generate output in this format for the returned data. Since this format is sometimes too terse to be easily understood, the call and get-property commands may generate a more verbose, multi-line output when passed the --verbose option. EXAMPLES top Example 1. Write and Read a Property The following two commands first write a property and then read it back. The property is found on the "/org/freedesktop/systemd1" object of the "org.freedesktop.systemd1" service. 
The name of the property is "LogLevel" on the "org.freedesktop.systemd1.Manager" interface. The property contains a single string: # busctl set-property org.freedesktop.systemd1 /org/freedesktop/systemd1 org.freedesktop.systemd1.Manager LogLevel s debug # busctl get-property org.freedesktop.systemd1 /org/freedesktop/systemd1 org.freedesktop.systemd1.Manager LogLevel s "debug" Example 2. Terse and Verbose Output The following two commands read a property that contains an array of strings, and first show it in terse format, followed by verbose format: $ busctl get-property org.freedesktop.systemd1 /org/freedesktop/systemd1 org.freedesktop.systemd1.Manager Environment as 2 "LANG=en_US.UTF-8" "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin" $ busctl get-property --verbose org.freedesktop.systemd1 /org/freedesktop/systemd1 org.freedesktop.systemd1.Manager Environment ARRAY "s" { STRING "LANG=en_US.UTF-8"; STRING "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin"; }; Example 3. Invoking a Method The following command invokes the "StartUnit" method on the "org.freedesktop.systemd1.Manager" interface of the "/org/freedesktop/systemd1" object of the "org.freedesktop.systemd1" service, and passes it two strings "cups.service" and "replace". As a result of the method call, a single object path parameter is received and shown: # busctl call org.freedesktop.systemd1 /org/freedesktop/systemd1 org.freedesktop.systemd1.Manager StartUnit ss "cups.service" "replace" o "/org/freedesktop/systemd1/job/42684" SEE ALSO top dbus-daemon(1), D-Bus[3], sd-bus(3), varlinkctl(1), systemd(1), machinectl(1), wireshark(1) NOTES top 1. PCAP Next Generation (pcapng) Capture File Format https://github.com/pcapng/pcapng/ 2. Type system chapter of the D-Bus specification https://dbus.freedesktop.org/doc/dbus-specification.html#type-system 3. D-Bus https://www.freedesktop.org/wiki/Software/dbus COLOPHON top This page is part of the systemd (systemd system and service manager) project. 
Information about the project can be found at http://www.freedesktop.org/wiki/Software/systemd. If you have a bug report for this manual page, see http://www.freedesktop.org/wiki/Software/systemd/#bugreports. This page was obtained from the project's upstream Git repository https://github.com/systemd/systemd.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-22.) systemd 255 BUSCTL(1) | # busctl\n\n> Introspect and monitor the D-Bus bus.\n> More information: <https://www.freedesktop.org/software/systemd/man/busctl.html>.\n\n- Show all peers on the bus, by their service names:\n\n`busctl list`\n\n- Show process information and credentials of a bus service, a process, or the owner of the bus (if no parameter is specified):\n\n`busctl status {{service|pid}}`\n\n- Dump messages being exchanged. 
If no service is specified, show all messages on the bus:\n\n`busctl monitor {{service1 service2 ...}}`\n\n- Show an object tree of one or more services (or all services if no service is specified):\n\n`busctl tree {{service1 service2 ...}}`\n\n- Show interfaces, methods, properties and signals of the specified object on the specified service:\n\n`busctl introspect {{service}} {{path/to/object}}`\n\n- Retrieve the current value of one or more object properties:\n\n`busctl get-property {{service}} {{path/to/object}} {{interface_name}} {{property_name}}`\n\n- Invoke a method and show the response:\n\n`busctl call {{service}} {{path/to/object}} {{interface_name}} {{method_name}}`\n |
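The signature-then-values layout described under PARAMETER FORMATTING above can be staged in a shell array before handing it to `busctl call`; a sketch (the service, object and interface names are hypothetical, and the actual call needs a running user bus, so it is left commented out):

```shell
#!/bin/bash
# Dictionary example from the man page: signature a{sv} with three entries.
# Note: {sv} survives literally, since bash brace expansion requires a
# comma or '..' inside the braces.
args=(a{sv} 3 One s Eins Two u 2 Yes b true)

# Hypothetical invocation (requires a running user bus):
# busctl --user call org.example.Service /org/example org.example.Iface \
#     SetAll "${args[@]}"

echo "${#args[@]} tokens: ${args[*]}"
# prints: 11 tokens: a{sv} 3 One s Eins Two u 2 Yes b true
```

Keeping the signature and its values in one array makes it easy to see that busctl expects the count of dictionary entries (here 3) before the entries themselves.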
c99 | c99(1p) - Linux manual page c99(1p) Linux manual page PROLOG | NAME | SYNOPSIS | DESCRIPTION | OPTIONS | OPERANDS | STDIN | INPUT FILES | ENVIRONMENT VARIABLES | ASYNCHRONOUS EVENTS | STDOUT | STDERR | OUTPUT FILES | EXTENDED DESCRIPTION | EXIT STATUS | CONSEQUENCES OF ERRORS | APPLICATION USAGE | EXAMPLES | RATIONALE | FUTURE DIRECTIONS | SEE ALSO | COPYRIGHT C99(1P) POSIX Programmer's Manual C99(1P) PROLOG top This manual page is part of the POSIX Programmer's Manual. The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux. NAME top c99 - compile standard C programs SYNOPSIS top c99 [options...] pathname [[pathname] [-I directory] [-L directory] [-l library]]... DESCRIPTION top The c99 utility is an interface to the standard C compilation system; it shall accept source code conforming to the ISO C standard. The system conceptually consists of a compiler and link editor. The input files referenced by pathname operands and -l option-arguments shall be compiled and linked to produce an executable file. (It is unspecified whether the linking occurs entirely within the operation of c99; some implementations may produce objects that are not fully resolved until the file is executed.) If the -c option is specified, for all pathname operands of the form file.c, the files: $(basename pathname .c).o shall be created as the result of successful compilation. If the -c option is not specified, it is unspecified whether such .o files are created or deleted for the file.c operands. If there are no options that prevent link editing (such as -c or -E), and all input files compile and link without error, the resulting executable file shall be written according to the -o outfile option (if present) or to the file a.out. 
The executable file shall be created as specified in Section 1.1.1.4, File Read, Write, and Creation, except that the file permission bits shall be set to: S_IRWXO | S_IRWXG | S_IRWXU and the bits specified by the umask of the process shall be cleared. OPTIONS top The c99 utility shall conform to the Base Definitions volume of POSIX.1-2017, Section 12.2, Utility Syntax Guidelines, except that: * Options can be interspersed with operands. * The order of specifying the -L and -l options, and the order of specifying -l options with respect to pathname operands is significant. * Conforming applications shall specify each option separately; that is, grouping option letters (for example, -cO) need not be recognized by all implementations. The following options shall be supported: -c Suppress the link-edit phase of the compilation, and do not remove any object files that are produced. -D name[=value] Define name as if by a C-language #define directive. If no =value is given, a value of 1 shall be used. The -D option has lower precedence than the -U option. That is, if name is used in both a -U and a -D option, name shall be undefined regardless of the order of the options. Additional implementation-defined names may be provided by the compiler. Implementations shall support at least 2048 bytes of -D definitions and 256 names. -E Copy C-language source files to standard output, executing all preprocessor directives; no compilation shall be performed. If any operand is not a text file, the effects are unspecified. -g Produce symbolic information in the object or executable files; the nature of this information is unspecified, and may be modified by implementation-defined interactions with other options. -I directory Change the algorithm for searching for headers whose names are not absolute pathnames to look in the directory named by the directory pathname before looking in the usual places. 
Thus, headers whose names are enclosed in double-quotes ("") shall be searched for first in the directory of the file with the #include line, then in directories named in -I options, and last in the usual places. For headers whose names are enclosed in angle brackets ("<>"), the header shall be searched for only in directories named in -I options and then in the usual places. Directories named in -I options shall be searched in the order specified. If the -I option is used to specify a directory that is one of the usual places searched by default, the results are unspecified. Implementations shall support at least ten instances of this option in a single c99 command invocation. -L directory Change the algorithm of searching for the libraries named in the -l objects to look in the directory named by the directory pathname before looking in the usual places. Directories named in -L options shall be searched in the order specified. If the -L option is used to specify a directory that is one of the usual places searched by default, the results are unspecified. Implementations shall support at least ten instances of this option in a single c99 command invocation. If a directory specified by a -L option contains files with names starting with any of the strings "libc.", "libl.", "libpthread.", "libm.", "librt.", "libtrace.", "libxnet.", or "liby.", the results are unspecified. -l library Search the library named liblibrary.a. A library shall be searched when its name is encountered, so the placement of a -l option is significant. Several standard libraries can be specified in this manner, as described in the EXTENDED DESCRIPTION section. Implementations may recognize implementation-defined suffixes other than .a as denoting libraries. -O optlevel Specify the level of code optimization. If the optlevel option-argument is the digit '0', all special code optimizations shall be disabled. If it is the digit '1', the nature of the optimization is unspecified. 
If the -O option is omitted, the nature of the system's default optimization is unspecified. It is unspecified whether code generated in the presence of the -O 0 option is the same as that generated when -O is omitted. Other optlevel values may be supported. -o outfile Use the pathname outfile, instead of the default a.out, for the executable file produced. If the -o option is present with -c or -E, the result is unspecified. -s Produce object or executable files, or both, from which symbolic and other information not required for proper execution using the exec family defined in the System Interfaces volume of POSIX.12017 has been removed (stripped). If both -g and -s options are present, the action taken is unspecified. -U name Remove any initial definition of name. Multiple instances of the -D, -I, -L, -l, and -U options can be specified. OPERANDS top The application shall ensure that at least one pathname operand is specified. The following forms for pathname operands shall be supported: file.c A C-language source file to be compiled and optionally linked. The application shall ensure that the operand is of this form if the -c option is used. file.a A library of object files typically produced by the ar utility, and passed directly to the link editor. Implementations may recognize implementation-defined suffixes other than .a as denoting object file libraries. file.o An object file produced by c99 -c and passed directly to the link editor. Implementations may recognize implementation-defined suffixes other than .o as denoting object files. The processing of other files is implementation-defined. STDIN top Not used. INPUT FILES top Each input file shall be one of the following: a text file containing a C-language source program, an object file in the format produced by c99 -c, or a library of object files, in the format produced by archiving zero or more object files, using ar. Implementations may supply additional utilities that produce files in these formats. 
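The operand forms above (file.c, file.o) support ordinary separate compilation; a minimal sketch, assuming c99 is installed and using hypothetical files main.c and util.c:

```shell
# Compile each translation unit to an object file, then hand the
# object files to the link editor in a second step.
c99 -c util.c               # produces util.o
c99 -c main.c               # produces main.o
c99 -o prog main.o util.o   # no .c operands remain: link edit only
```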
Additional input file formats are implementation-defined. ENVIRONMENT VARIABLES top The following environment variables shall affect the execution of c99: LANG Provide a default value for the internationalization variables that are unset or null. (See the Base Definitions volume of POSIX.12017, Section 8.2, Internationalization Variables for the precedence of internationalization variables used to determine the values of locale categories.) LC_ALL If set to a non-empty string value, override the values of all the other internationalization variables. LC_CTYPE Determine the locale for the interpretation of sequences of bytes of text data as characters (for example, single-byte as opposed to multi-byte characters in arguments and input files). LC_MESSAGES Determine the locale that should be used to affect the format and contents of diagnostic messages written to standard error. NLSPATH Determine the location of message catalogs for the processing of LC_MESSAGES. TMPDIR Provide a pathname that should override the default directory for temporary files, if any. On XSI- conforming systems, provide a pathname that shall override the default directory for temporary files, if any. ASYNCHRONOUS EVENTS top Default. STDOUT top If more than one pathname operand ending in .c (or possibly other unspecified suffixes) is given, for each such file: "%s:\n", <pathname> may be written. These messages, if written, shall precede the processing of each input file; they shall not be written to the standard output if they are written to the standard error, as described in the STDERR section. If the -E option is specified, the standard output shall be a text file that represents the results of the preprocessing stage of the language; it may contain extra information appropriate for subsequent compilation passes. STDERR top The standard error shall be used only for diagnostic messages. 
If more than one pathname operand ending in .c (or possibly other unspecified suffixes) is given, for each such file: "%s:\n", <pathname> may be written to allow identification of the diagnostic and warning messages with the appropriate input file. These messages, if written, shall precede the processing of each input file; they shall not be written to the standard error if they are written to the standard output, as described in the STDOUT section. This utility may produce warning messages about certain conditions that do not warrant returning an error (non-zero) exit value. OUTPUT FILES top Object files or executable files or both are produced in unspecified formats. If the pathname of an object file or executable file to be created by c99 resolves to an existing directory entry for a file that is not a regular file, it is unspecified whether c99 shall attempt to create the file or shall issue a diagnostic and exit with a non-zero exit status. EXTENDED DESCRIPTION top Standard Libraries The c99 utility shall recognize the following -l options for standard libraries: -l c This option shall make available all interfaces referenced in the System Interfaces volume of POSIX.12017, with the possible exception of those interfaces listed as residing in <aio.h>, <arpa/inet.h>, <complex.h>, <fenv.h>, <math.h>, <mqueue.h>, <netdb.h>, <net/if.h>, <netinet/in.h>, <pthread.h>, <sched.h>, <semaphore.h>, <spawn.h>, <sys/socket.h>, pthread_kill(), and pthread_sigmask() in <signal.h>, <trace.h>, interfaces marked as optional in <sys/mman.h>, interfaces marked as ADV (Advisory Information) in <fcntl.h>, and interfaces beginning with the prefix clock_ or timer_ in <time.h>. This option shall not be required to be present to cause a search of this library. -l l This option shall make available all interfaces required by the C-language output of lex that are not made available through the -l c option. 
-l pthread This option shall make available all interfaces referenced in <pthread.h> and pthread_kill() and pthread_sigmask() referenced in <signal.h>. An implementation may search this library in the absence of this option. -l m This option shall make available all interfaces referenced in <math.h>, <complex.h>, and <fenv.h>. An implementation may search this library in the absence of this option. -l rt This option shall make available all interfaces referenced in <aio.h>, <mqueue.h>, <sched.h>, <semaphore.h>, and <spawn.h>, interfaces marked as optional in <sys/mman.h>, interfaces marked as ADV (Advisory Information) in <fcntl.h>, and interfaces beginning with the prefix clock_ and timer_ in <time.h>. An implementation may search this library in the absence of this option. -l trace This option shall make available all interfaces referenced in <trace.h>. An implementation may search this library in the absence of this option. -l xnet This option shall make available all interfaces referenced in <arpa/inet.h>, <netdb.h>, <net/if.h>, <netinet/in.h>, and <sys/socket.h>. An implementation may search this library in the absence of this option. -l y This option shall make available all interfaces required by the C-language output of yacc that are not made available through the -l c option. In the absence of options that inhibit invocation of the link editor, such as -c or -E, the c99 utility shall cause the equivalent of a -l c option to be passed to the link editor after the last pathname operand or -l option, causing it to be searched after all other object files and libraries are loaded. It is unspecified whether the libraries libc.a, libl.a, libm.a, libpthread.a, librt.a, libtrace.a, libxnet.a, or liby.a exist as regular files. The implementation may accept as -l option- arguments names of objects that do not exist as regular files. 
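As an illustration of the standard-library options above: a program using <math.h> interfaces should request -l m explicitly, since an implementation may search that library without the option but portable builds cannot rely on it. A sketch, assuming c99 is installed; root.c is a hypothetical file:

```shell
# root.c uses sqrt() from <math.h>; per the text above, -l m makes the
# <math.h>, <complex.h>, and <fenv.h> interfaces available.
cat > root.c <<'EOF'
#include <math.h>
#include <stdio.h>

int main(void) {
    printf("%.0f\n", sqrt(49.0));
    return 0;
}
EOF

c99 -o root root.c -l m   # placement matters: -l options follow the operands
./root                    # prints 7
```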
External Symbols The C compiler and link editor shall support the significance of external symbols up to a length of at least 31 bytes; the action taken upon encountering symbols exceeding the implementation- defined maximum symbol length is unspecified. The compiler and link editor shall support a minimum of 511 external symbols per source or object file, and a minimum of 4095 external symbols in total. A diagnostic message shall be written to the standard output if the implementation-defined limit is exceeded; other actions are unspecified. Header Search If a file with the same name as one of the standard headers defined in the Base Definitions volume of POSIX.12017, Chapter 13, Headers, not provided as part of the implementation, is placed in any of the usual places that are searched by default for headers, the results are unspecified. Programming Environments All implementations shall support one of the following programming environments as a default. Implementations may support more than one of the following programming environments. Applications can use sysconf() or getconf to determine which programming environments are supported. Table 4-4: Programming Environments: Type Sizes Programming Environment Bits in Bits in Bits in Bits in getconf Name int long pointer off_t _POSIX_V7_ILP32_OFF32 32 32 32 32 _POSIX_V7_ILP32_OFFBIG 32 32 32 64 _POSIX_V7_LP64_OFF64 32 64 64 64 _POSIX_V7_LPBIG_OFFBIG 32 64 64 64 All implementations shall support one or more environments where the widths of the following types are no greater than the width of type long: blksize_t ptrdiff_t tcflag_t cc_t size_t wchar_t mode_t speed_t wint_t nfds_t ssize_t pid_t suseconds_t The executable files created when these environments are selected shall be in a proper format for execution by the exec family of functions. Each environment may be one of the ones in Table 4-4, Programming Environments: Type Sizes, or it may be another environment. 
The names for the environments that meet this requirement shall be output by a getconf command using the POSIX_V7_WIDTH_RESTRICTED_ENVS argument, as a <newline>-separated list of names suitable for use with the getconf -v option. If more than one environment meets the requirement, the names of all such environments shall be output on separate lines. Any of these names can then be used in a subsequent getconf command to obtain the flags specific to that environment with the following suffixes added as appropriate: _CFLAGS To get the C compiler flags. _LDFLAGS To get the linker/loader flags. _LIBS To get the libraries. This requirement may be removed in a future version. When this utility processes a file containing a function called main(), it shall be defined with a return type equivalent to int. Using return from the initial call to main() shall be equivalent (other than with respect to language scope issues) to calling exit() with the returned value. Reaching the end of the initial call to main() shall be equivalent to calling exit(0). The implementation shall not declare a prototype for this function. Implementations provide configuration strings for C compiler flags, linker/loader flags, and libraries for each supported environment. When an application needs to use a specific programming environment rather than the implementation default programming environment while compiling, the application shall first verify that the implementation supports the desired environment. If the desired programming environment is supported, the application shall then invoke c99 with the appropriate C compiler flags as the first options for the compile, the appropriate linker/loader flags after any other options except -l but before any operands or -l options, and the appropriate libraries at the end of the operands and -l options. Conforming applications shall not attempt to link together object files compiled for different programming models. 
Applications shall also be aware that binary data placed in shared memory or in files might not be recognized by applications built for other programming models. Table 4-5: Programming Environments: c99 Arguments Programming Environment c99 Arguments getconf Name Use getconf Name _POSIX_V7_ILP32_OFF32 C Compiler Flags POSIX_V7_ILP32_OFF32_CFLAGS Linker/Loader Flags POSIX_V7_ILP32_OFF32_LDFLAGS Libraries POSIX_V7_ILP32_OFF32_LIBS _POSIX_V7_ILP32_OFFBIG C Compiler Flags POSIX_V7_ILP32_OFFBIG_CFLAGS Linker/Loader Flags POSIX_V7_ILP32_OFFBIG_LDFLAGS Libraries POSIX_V7_ILP32_OFFBIG_LIBS _POSIX_V7_LP64_OFF64 C Compiler Flags POSIX_V7_LP64_OFF64_CFLAGS Linker/Loader Flags POSIX_V7_LP64_OFF64_LDFLAGS Libraries POSIX_V7_LP64_OFF64_LIBS _POSIX_V7_LPBIG_OFFBIG C Compiler Flags POSIX_V7_LPBIG_OFFBIG_CFLAGS Linker/Loader Flags POSIX_V7_LPBIG_OFFBIG_LDFLAGS Libraries POSIX_V7_LPBIG_OFFBIG_LIBS In addition to the type size programming environments above, all implementations also support a multi-threaded programming environment that is orthogonal to all of the programming environments listed above. The getconf utility can be used to get flags for the threaded programming environment, as indicated in Table 4-6, Threaded Programming Environment: c99 Arguments. Table 4-6: Threaded Programming Environment: c99 Arguments Programming Environment c99 Arguments getconf Name Use getconf Name _POSIX_THREADS C Compiler Flags POSIX_V7_THREADS_CFLAGS Linker/Loader Flags POSIX_V7_THREADS_LDFLAGS These programming environment flags may be used in conjunction with any of the type size programming environments supported by the implementation. EXIT STATUS top The following exit values shall be returned: 0 Successful compilation or link edit. >0 An error occurred. 
CONSEQUENCES OF ERRORS top When c99 encounters a compilation error that causes an object file not to be created, it shall write a diagnostic to standard error and continue to compile other source code operands, but it shall not perform the link phase and it shall return a non-zero exit status. If the link edit is unsuccessful, a diagnostic message shall be written to standard error and c99 exits with a non-zero status. A conforming application shall rely on the exit status of c99, rather than on the existence or mode of the executable file. The following sections are informative. APPLICATION USAGE top Since the c99 utility usually creates files in the current directory during the compilation process, it is typically necessary to run the c99 utility in a directory in which a file can be created. On systems providing POSIX Conformance (see the Base Definitions volume of POSIX.12017, Chapter 2, Conformance), c99 is required only with the C-Language Development option; XSI-conformant systems always provide c99. Some historical implementations have created .o files when -c is not specified and more than one source file is given. Since this area is left unspecified, the application cannot rely on .o files being created, but it also must be prepared for any related .o files that already exist being deleted at the completion of the link edit. There is the possible implication that if a user supplies versions of the standard functions (before they would be encountered by an implicit -l c or explicit -l m), that those versions would be used in place of the standard versions. There are various reasons this might not be true (functions defined as macros, manipulations for clean name space, and so on), so the existence of files named in the same manner as the standard libraries within the -L directories is explicitly stated to produce unspecified behavior. 
All of the functions specified in the System Interfaces volume of POSIX.1-2017 may be made visible by implementations when the Standard C Library is searched. Conforming applications must explicitly request searching the other standard libraries when functions made visible by those libraries are used. In the ISO C standard the mapping from physical source characters to the C source character set is implementation-defined. Implementations may strip white-space characters before the terminating <newline> of a (physical) line as part of this mapping and, as a consequence of this, one or more white-space characters (and no other characters) between a <backslash> character and the <newline> character that terminates the line produces implementation-defined results. Portable applications should not use such constructs. Some c99 compilers not conforming to POSIX.1-2008 do not support trigraphs by default. EXAMPLES top 1. The following usage example compiles foo.c and creates the executable file foo: c99 -o foo foo.c The following usage example compiles foo.c and creates the object file foo.o: c99 -c foo.c The following usage example compiles foo.c and creates the executable file a.out: c99 foo.c The following usage example compiles foo.c, links it with bar.o, and creates the executable file a.out. It may also create and leave foo.o: c99 foo.c bar.o 2. 
The following example shows how an application using threads interfaces can test for support of and use a programming environment supporting 32-bit int, long, and pointer types and an off_t type using at least 64 bits: offbig_env=$(getconf _POSIX_V7_ILP32_OFFBIG) if [ $offbig_env != "-1" ] && [ $offbig_env != "undefined" ] then c99 $(getconf POSIX_V7_ILP32_OFFBIG_CFLAGS) \ $(getconf POSIX_V7_THREADS_CFLAGS) -D_XOPEN_SOURCE=700 \ $(getconf POSIX_V7_ILP32_OFFBIG_LDFLAGS) \ $(getconf POSIX_V7_THREADS_LDFLAGS) foo.c -o foo \ $(getconf POSIX_V7_ILP32_OFFBIG_LIBS) \ -l pthread else echo ILP32_OFFBIG programming environment not supported exit 1 fi 3. The following examples clarify the use and interactions of -L and -l options. Consider the case in which module a.c calls function f() in library libQ.a, and module b.c calls function g() in library libp.a. Assume that both libraries reside in /a/b/c. The command line to compile and link in the desired way is: c99 -L /a/b/c main.o a.c -l Q b.c -l p In this case the -L option need only precede the first -l option, since both libQ.a and libp.a reside in the same directory. Multiple -L options can be used when library name collisions occur. Building on the previous example, suppose that the user wants to use a new libp.a, in /a/a/a, but still wants f() from /a/b/c/libQ.a: c99 -L /a/a/a -L /a/b/c main.o a.c -l Q b.c -l p In this example, the linker searches the -L options in the order specified, and finds /a/a/a/libp.a before /a/b/c/libp.a when resolving references for b.c. The order of the -l options is still important, however. 4. The following example shows how an application can use a programming environment where the widths of the following types: blksize_t, cc_t, mode_t, nfds_t, pid_t, ptrdiff_t, size_t, speed_t, ssize_t, suseconds_t, tcflag_t, wchar_t, wint_t are no greater than the width of type long: # First choose one of the listed environments ... # ... 
if there are no additional constraints, the first one will do: CENV=$(getconf POSIX_V7_WIDTH_RESTRICTED_ENVS | head -n 1) # ... or, if an environment that supports large files is preferred, # look for names that contain "OFF64" or "OFFBIG". (This chooses # the last one in the list if none match.) for CENV in $(getconf POSIX_V7_WIDTH_RESTRICTED_ENVS) do case $CENV in *OFF64*|*OFFBIG*) break ;; esac done # The chosen environment name can now be used like this: c99 $(getconf ${CENV}_CFLAGS) -D _POSIX_C_SOURCE=200809L \ $(getconf ${CENV}_LDFLAGS) foo.c -o foo \ $(getconf ${CENV}_LIBS) RATIONALE top The c99 utility is based on the c89 utility originally introduced in the ISO POSIX-2:1993 standard. Some of the changes from c89 include the ability to intersperse options and operands (which many c89 implementations allowed despite it not being specified), the description of -l as an option instead of an operand, and the modification to the contents of the Standard Libraries section to account for new headers and options; for example, <spawn.h> added to the description of -l rt, and -l trace added for the Tracing option. POSIX.1-2008 specifies that the c99 utility must be able to use regular files for *.o files and for a.out files. Implementations are free to overwrite existing files of other types when attempting to create object files and executable files, but are not required to do so. If something other than a regular file is specified and using it fails for any reason, c99 is required to issue a diagnostic message and exit with a non-zero exit status. But for some file types, the problem may not be noticed for a long time. For example, if a FIFO named a.out exists in the current directory, c99 may attempt to open a.out and will hang in the open() call until another process opens the FIFO for reading. Then c99 may write most of the a.out to the FIFO and fail when it tries to seek back close to the start of the file to insert a timestamp (FIFOs are not seekable files). 
The c99 utility is also allowed to issue a diagnostic immediately if it encounters an a.out or *.o file that is not a regular file. For portable use, applications should ensure that any a.out, -o option-argument, or *.o files corresponding to any *.c files do not conflict with names already in use that are not regular files or symbolic links that point to regular files. On many systems, multi-threaded applications run in a programming environment that is distinct from that used by single-threaded applications. This multi-threaded programming environment (in addition to needing to specify -l pthread at link time) may require additional flags to be set when headers are processed at compile time (-D_REENTRANT being common). This programming environment is orthogonal to the type size programming environments discussed above and listed in Table 4-4, Programming Environments: Type Sizes. This version of the standard adds getconf utility calls to provide the C compiler flags and linker/loader flags needed to support multi-threaded applications. Note that on a system where single-threaded applications are a special case of a multi-threaded application, both of these getconf calls may return NULL strings; on other implementations both of these strings may be non-NULL strings. The C standardization committee invented trigraphs (e.g., "??!" to represent '|') to address character portability problems in development environments based on national variants of the 7-bit ISO/IEC 646:1991 standard character set. However, these environments were already obsolete by the time the first ISO C standard was published, and in practice trigraphs have not been used for their intended purpose, and usually are intended to have their original meaning in K&R C. For example, in practice a C- language source string like "What??!" is usually intended to end in two <question-mark> characters and an <exclamation-mark>, not in '|'. 
When the -E option is used, execution of some #pragma preprocessor directives may simply result in a copy of the directive being included in the output as part of the allowed extra information used by subsequent compilation passes (see STDOUT). FUTURE DIRECTIONS top Unlike all of the other non-OB-shaded utilities in this standard, a utility by this name probably will not appear in the next version of this standard. This utility's name is tied to the current revision of the ISO C standard at the time this standard is approved. Since the ISO C standard and this standard are maintained by different organizations on different schedules, we cannot predict what the compiler will be named in the next version of the standard. SEE ALSO top Section 1.1.1.4, File Read, Write, and Creation, ar(1p), getconf(1p), make(1p), nm(1p), strip(1p), umask(1p) The Base Definitions volume of POSIX.12017, Chapter 8, Environment Variables, Section 12.2, Utility Syntax Guidelines, Chapter 13, Headers The System Interfaces volume of POSIX.12017, exec(1p), sysconf(3p) COPYRIGHT top Portions of this text are reprinted and reproduced in electronic form from IEEE Std 1003.1-2017, Standard for Information Technology -- Portable Operating System Interface (POSIX), The Open Group Base Specifications Issue 7, 2018 Edition, Copyright (C) 2018 by the Institute of Electrical and Electronics Engineers, Inc and The Open Group. In the event of any discrepancy between this version and the original IEEE and The Open Group Standard, the original IEEE and The Open Group Standard is the referee document. The original Standard can be obtained online at http://www.opengroup.org/unix/online.html . Any typographical or formatting errors that appear in this page are most likely to have been introduced during the conversion of the source files to man page format. To report such errors, see https://www.kernel.org/doc/man-pages/reporting_bugs.html . 
IEEE/The Open Group 2017 C99(1P) Pages that refer to this page: ar(1p), cflow(1p), ctags(1p), cxref(1p), fort77(1p), getconf(1p), lex(1p), m4(1p), make(1p), nm(1p), od(1p), strip(1p), yacc(1p), confstr(3p) | # c99\n\n> Compiles C programs according to the ISO C standard.\n> More information: <https://manned.org/c99>.\n\n- Compile source file(s) and create an executable:\n\n`c99 {{file.c}}`\n\n- Compile source file(s) and specify the executable [o]utput filename:\n\n`c99 -o {{executable_name}} {{file.c}}`\n\n- Compile source file(s) and create object file(s):\n\n`c99 -c {{file.c}}`\n\n- Compile source file(s), link with object file(s), and create an executable:\n\n`c99 {{file.c}} {{file.o}}`\n |
cal | cal(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training cal(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | PARAMETERS | NOTES | COLORS | HISTORY | BUGS | SEE ALSO | REPORTING BUGS | AVAILABILITY CAL(1) User Commands CAL(1) NAME top cal - display a calendar SYNOPSIS top cal [options] [[[day] month] year] cal [options] [timestamp|monthname] DESCRIPTION top cal displays a simple calendar. If no arguments are specified, the current month is displayed. The month may be specified as a number (1-12), as a month name or as an abbreviated month name according to the current locales. Two different calendar systems are used, Gregorian and Julian. These are nearly identical systems with Gregorian making a small adjustment to the frequency of leap years; this facilitates improved synchronization with solar events like the equinoxes. The Gregorian calendar reform was introduced in 1582, but its adoption continued up to 1923. By default cal uses the adoption date of 3 Sept 1752. From that date forward the Gregorian calendar is displayed; previous dates use the Julian calendar system. 11 days were removed at the time of adoption to bring the calendar in sync with solar events. So Sept 1752 has a mix of Julian and Gregorian dates by which the 2nd is followed by the 14th (the 3rd through the 13th are absent). Optionally, either the proleptic Gregorian calendar or the Julian calendar may be used exclusively. See --reform below. OPTIONS top -1, --one Display single month output. (This is the default.) -3, --three Display three months spanning the date. -n , --months number Display number of months, starting from the month containing the date. -S, --span Display months spanning the date. -s, --sunday Display Sunday as the first day of the week. -m, --monday Display Monday as the first day of the week. -v, --vertical Display using a vertical layout (aka ncal(1) mode). --iso Display the proleptic Gregorian calendar exclusively. 
This option does not affect week numbers and the first day of the week. See --reform below. -j, --julian Use day-of-year numbering for all calendars. These are also called ordinal days. Ordinal days range from 1 to 366. This option does not switch from the Gregorian to the Julian calendar system, that is controlled by the --reform option. Sometimes Gregorian calendars using ordinal dates are referred to as Julian calendars. This can be confusing due to the many date related conventions that use Julian in their name: (ordinal) julian date, julian (calendar) date, (astronomical) julian date, (modified) julian date, and more. This option is named julian, because ordinal days are identified as julian by the POSIX standard. However, be aware that cal also uses the Julian calendar system. See DESCRIPTION above. --reform val This option sets the adoption date of the Gregorian calendar reform. Calendar dates previous to reform use the Julian calendar system. Calendar dates after reform use the Gregorian calendar system. The argument val can be: 1752 - sets 3 September 1752 as the reform date (default). This is when the Gregorian calendar reform was adopted by the British Empire. gregorian - display Gregorian calendars exclusively. This special placeholder sets the reform date below the smallest year that cal can use; meaning all calendar output uses the Gregorian calendar system. This is called the proleptic Gregorian calendar, because dates prior to the calendar systems creation use extrapolated values. iso - alias of gregorian. The ISO 8601 standard for the representation of dates and times in information interchange requires using the proleptic Gregorian calendar. julian - display Julian calendars exclusively. This special placeholder sets the reform date above the largest year that cal can use; meaning all calendar output uses the Julian calendar system. See DESCRIPTION above. -y, --year Display a calendar for the whole year. 
-Y, --twelve Display a calendar for the next twelve months. -w, --week[=number] Display week numbers in the calendar (US or ISO-8601). See the NOTES section for more details. --color[=when] Colorize the output. The optional argument when can be auto, never or always. If the when argument is omitted, it defaults to auto. The colors can be disabled; for the current built-in default see the --help output. See also the COLORS section. -c, --columns=columns Number of columns to use. auto uses as many as fit the terminal. -h, --help Display help text and exit. -V, --version Print version and exit. PARAMETERS top Single digits-only parameter (e.g., 'cal 2020') Specifies the year to be displayed; note the year must be fully specified: cal 89 will not display a calendar for 1989. Single string parameter (e.g., 'cal tomorrow' or 'cal August') Specifies a timestamp or a month name (or abbreviated name) according to the current locales. Special placeholders are accepted when parsing the timestamp: "now" may be used to refer to the current time, and "today", "yesterday", "tomorrow" refer to the current day, the day before, or the next day, respectively. Relative date specifications are also accepted; in this case "+" is evaluated to the current time plus the specified time span. Correspondingly, a time span that is prefixed with "-" is evaluated to the current time minus the specified time span, for example '+2days'. Instead of prefixing the time span with "+" or "-", it may also be suffixed with a space and the word "left" or "ago" (for example '1 week ago'). Two parameters (e.g., 'cal 11 2020') Denote the month (1 - 12) and year. Three parameters (e.g., 'cal 25 11 2020') Denote the day (1-31), month and year, and the day will be highlighted if the calendar is displayed on a terminal. If no parameters are specified, the current month's calendar is displayed. NOTES top A year starts on January 1. 
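The parameter forms described above can be sketched as follows (the timestamp strings such as 'tomorrow' assume the util-linux implementation of cal):

```shell
cal 2020            # one number is always a year: the full 2020 calendar
cal 11 2020         # month and year: November 2020
cal 25 11 2020      # day, month, year: the 25th is highlighted on a terminal
cal tomorrow        # month containing tomorrow's date
cal '1 week ago'    # month containing the date one week back
```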
The first day of the week is determined by the locale or the --sunday and --monday options. The week numbering depends on the choice of the first day of the week. If it is Sunday then the customary North American numbering is used, where 1 January is in week number 1. If it is Monday (-m) then the ISO 8601 standard week numbering is used, where the first Thursday is in week number 1. COLORS top The output colorization is implemented by terminal-colors.d(5) functionality. Implicit coloring can be disabled by an empty file /etc/terminal-colors.d/cal.disable for the cal command or for all tools by /etc/terminal-colors.d/disable The user-specific $XDG_CONFIG_HOME/terminal-colors.d or $HOME/.config/terminal-colors.d overrides the global setting. Note that the output colorization may be enabled by default, and in this case terminal-colors.d directories do not have to exist yet. The logical color names supported by cal are: today The current day. weeknumber The number of the week. header The header of a month. workday Days that fall within the work-week. weekend Days that fall outside the work-week. For example: echo -e 'weekend 35\ntoday 1;41\nheader yellow' > $HOME/.config/terminal-colors.d/cal.scheme HISTORY top A cal command appeared in Version 6 AT&T UNIX. BUGS top The default cal output uses 3 September 1752 as the Gregorian calendar reform date. The historical reform dates for the other locales, including its introduction in October 1582, are not implemented. Alternative calendars, such as the Umm al-Qura, the Solar Hijri, the Geez, or the lunisolar Hindu, are not supported. SEE ALSO top terminal-colors.d(5) REPORTING BUGS top For bug reports, use the issue tracker at https://github.com/util-linux/util-linux/issues. AVAILABILITY top The cal command is part of the util-linux package which can be downloaded from Linux Kernel Archive <https://www.kernel.org/pub/linux/utils/util-linux/>. 
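The two week-numbering schemes described in NOTES can be compared side by side; a sketch, assuming --week is available (util-linux 2.25 or later):

```shell
# US numbering (Sunday first): 1 January 2021 falls in week 1.
cal --sunday --week 1 2021

# ISO 8601 numbering (Monday first): week 1 holds the first Thursday,
# so 1-3 January 2021 still belong to week 53 of 2020.
cal --monday --week 1 2021
```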
This page is part of the util-linux (a random collection of Linux utilities) project. Information about the project can be found at https://www.kernel.org/pub/linux/utils/util-linux/. If you have a bug report for this manual page, send it to util-linux@vger.kernel.org. This page was obtained from the project's upstream Git repository git://git.kernel.org/pub/scm/utils/util-linux/util-linux.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-14.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org util-linux 2.39.594-1e0ad 2023-07-19 CAL(1) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH. | # cal\n\n> Display a calendar with the current day highlighted.\n> More information: <https://manned.org/cal>.\n\n- Display a calendar for the current month:\n\n`cal`\n\n- Display [3] months spanning the date:\n\n`cal -3`\n\n- Display the whole calendar for the current [y]ear:\n\n`cal --year`\n\n- Display the next twelve months:\n\n`cal --twelve`\n\n- Use Monday as the first day of the week:\n\n`cal --monday`\n\n- Display a calendar for a specific year (4 digits):\n\n`cal {{year}}`\n\n- Display a calendar for a specific month and year:\n\n`cal {{month}} {{year}}`\n |
cancel | cancel(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training cancel(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | CONFORMING TO | EXAMPLES | NOTES | SEE ALSO | COPYRIGHT | COLOPHON cancel(1) Apple Inc. cancel(1) NAME top cancel - cancel jobs SYNOPSIS top cancel [ -E ] [ -U username ] [ -a ] [ -h hostname[:port] ] [ -u username ] [ -x ] [ id ] [ destination ] [ destination-id ] DESCRIPTION top The cancel command cancels print jobs. If no destination or id is specified, the currently printing job on the default destination is canceled. OPTIONS top The following options are recognized by cancel: -a Cancel all jobs on the named destination, or all jobs on all destinations if none is provided. -E Forces encryption when connecting to the server. -h hostname[:port] Specifies an alternate server. -U username Specifies the username to use when connecting to the server. -u username Cancels jobs owned by username. -x Deletes job data files in addition to canceling. CONFORMING TO top Unlike the System V printing system, CUPS allows printer names to contain any printable character except SPACE, TAB, "/", or "#". Also, printer and class names are not case-sensitive. EXAMPLES top Cancel the current print job: cancel Cancel job "myprinter-42": cancel myprinter-42 Cancel all jobs: cancel -a NOTES top Administrators wishing to prevent unauthorized cancellation of jobs via the -u option should require authentication for Cancel- Jobs operations in cupsd.conf(5). SEE ALSO top cupsd.conf(5), lp(1), lpmove(8), lpstat(1), CUPS Online Help (http://localhost:631/help) COPYRIGHT top Copyright 2007-2019 by Apple Inc. COLOPHON top This page is part of the CUPS (a standards-based, open source printing system) project. Information about the project can be found at http://www.cups.org/. If you have a bug report for this manual page, see http://www.cups.org/. 
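The server and user options combine naturally. A sketch with placeholder names (office, printserver, and alice are illustrative), guarded so it is a no-op unless a CUPS client and scheduler are actually available:

```shell
# Run only if a CUPS client is installed and a local scheduler answers.
if command -v cancel >/dev/null 2>&1 && lpstat -d >/dev/null 2>&1; then
    cancel office-42              # cancel job 42 on destination "office"
    cancel -u alice office        # cancel alice's jobs on "office"
    cancel -h printserver:631 -a  # cancel every job on another server
fi
```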
This page was obtained from the project's upstream Git repository https://github.com/apple/cups on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-10-27.) 26 April 2019 CUPS cancel(1) Pages that refer to this page: cups(1), lp(1), lpoptions(1), lpq(1), lpr(1), lprm(1), lpstat(1), cupsaccept(8), cupsenable(8), lpc(8), lpmove(8) | # cancel\n\n> Cancel print jobs.\n> See also: `lp`, `lpmove`, `lpstat`.\n> More information: <https://openprinting.github.io/cups/doc/man-cancel.html>.\n\n- Cancel the current job of the default printer (set with `lpoptions -d {{printer}}`):\n\n`cancel`\n\n- Cancel the jobs of the default printer owned by a specific [u]ser:\n\n`cancel -u {{username}}`\n\n- Cancel the current job of a specific printer:\n\n`cancel {{printer}}`\n\n- Cancel a specific job from a specific printer:\n\n`cancel {{printer}}-{{job_id}}`\n\n- Cancel [a]ll jobs of all printers:\n\n`cancel -a`\n\n- Cancel [a]ll jobs of a specific printer:\n\n`cancel -a {{printer}}`\n\n- Cancel the current job of a specific server and then delete ([x]) job data files:\n\n`cancel -h {{server}} -x`\n |
cat | cat(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training cat(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | EXAMPLES | AUTHOR | REPORTING BUGS | COPYRIGHT | SEE ALSO | COLOPHON CAT(1) User Commands CAT(1) NAME top cat - concatenate files and print on the standard output SYNOPSIS top cat [OPTION]... [FILE]... DESCRIPTION top Concatenate FILE(s) to standard output. With no FILE, or when FILE is -, read standard input. -A, --show-all equivalent to -vET -b, --number-nonblank number nonempty output lines, overrides -n -e equivalent to -vE -E, --show-ends display $ at end of each line -n, --number number all output lines -s, --squeeze-blank suppress repeated empty output lines -t equivalent to -vT -T, --show-tabs display TAB characters as ^I -u (ignored) -v, --show-nonprinting use ^ and M- notation, except for LFD and TAB --help display this help and exit --version output version information and exit EXAMPLES top cat f - g Output f's contents, then standard input, then g's contents. cat Copy standard input to standard output. AUTHOR top Written by Torbjorn Granlund and Richard M. Stallman. REPORTING BUGS top GNU coreutils online help: <https://www.gnu.org/software/coreutils/> Report any translation bugs to <https://translationproject.org/team/> COPYRIGHT top Copyright 2023 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. SEE ALSO top tac(1) Full documentation <https://www.gnu.org/software/coreutils/cat> or available locally via: info '(coreutils) cat invocation' COLOPHON top This page is part of the coreutils (basic file, shell and text manipulation utilities) project. Information about the project can be found at http://www.gnu.org/software/coreutils/. If you have a bug report for this manual page, see http://www.gnu.org/software/coreutils/. 
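The -v/-E/-T family is easiest to understand on crafted input; for example:

```shell
# -A (equivalent to -vET) makes tabs visible as ^I and line ends as $:
printf 'one\ttwo \nthree\n' | cat -A
# one^Itwo $
# three$

# -b numbers only non-blank lines; -s squeezes runs of blank lines to one:
printf 'a\n\nb\n' | cat -b
printf 'a\n\n\n\nb\n' | cat -s
```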
This page was obtained from the tarball coreutils-9.4.tar.xz fetched from http://ftp.gnu.org/gnu/coreutils/ on 2023-12-22. GNU coreutils 9.4 August 2023 CAT(1) Pages that refer to this page: pmlogrewrite(1), pv(1), systemd-socket-activate(1), tac(1), ul(1), proc(5), cpuset(7), time_namespaces(7), readprofile(8) | # cat\n\n> Print and concatenate files.\n> More information: <https://www.gnu.org/software/coreutils/cat>.\n\n- Print the contents of a file to `stdout`:\n\n`cat {{path/to/file}}`\n\n- Concatenate several files into an output file:\n\n`cat {{path/to/file1 path/to/file2 ...}} > {{path/to/output_file}}`\n\n- Append several files to an output file:\n\n`cat {{path/to/file1 path/to/file2 ...}} >> {{path/to/output_file}}`\n\n- Write `stdin` to a file:\n\n`cat - > {{path/to/file}}`\n\n- [n]umber all output lines:\n\n`cat -n {{path/to/file}}`\n\n- Display non-printable and whitespace characters (with `M-` prefix if non-ASCII):\n\n`cat -v -t -e {{path/to/file}}`\n |
cd | cd(1p) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training cd(1p) Linux manual page PROLOG | NAME | SYNOPSIS | DESCRIPTION | OPTIONS | OPERANDS | STDIN | INPUT FILES | ENVIRONMENT VARIABLES | ASYNCHRONOUS EVENTS | STDOUT | STDERR | OUTPUT FILES | EXTENDED DESCRIPTION | EXIT STATUS | CONSEQUENCES OF ERRORS | APPLICATION USAGE | EXAMPLES | RATIONALE | FUTURE DIRECTIONS | SEE ALSO | COPYRIGHT CD(1P) POSIX Programmer's Manual CD(1P) PROLOG top This manual page is part of the POSIX Programmer's Manual. The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux. NAME top cd - change the working directory SYNOPSIS top cd [-L|-P] [directory] cd - DESCRIPTION top The cd utility shall change the working directory of the current shell execution environment (see Section 2.12, Shell Execution Environment) by executing the following steps in sequence. (In the following steps, the symbol curpath represents an intermediate value used to simplify the description of the algorithm used by cd. There is no requirement that curpath be made visible to the application.) 1. If no directory operand is given and the HOME environment variable is empty or undefined, the default behavior is implementation-defined and no further steps shall be taken. 2. If no directory operand is given and the HOME environment variable is set to a non-empty value, the cd utility shall behave as if the directory named in the HOME environment variable was specified as the directory operand. 3. If the directory operand begins with a <slash> character, set curpath to the operand and proceed to step 7. 4. If the first component of the directory operand is dot or dot-dot, proceed to step 6. 5. 
Starting with the first pathname in the <colon>-separated pathnames of CDPATH (see the ENVIRONMENT VARIABLES section) if the pathname is non-null, test if the concatenation of that pathname, a <slash> character if that pathname did not end with a <slash> character, and the directory operand names a directory. If the pathname is null, test if the concatenation of dot, a <slash> character, and the operand names a directory. In either case, if the resulting string names an existing directory, set curpath to that string and proceed to step 7. Otherwise, repeat this step with the next pathname in CDPATH until all pathnames have been tested. 6. Set curpath to the directory operand. 7. If the -P option is in effect, proceed to step 10. If curpath does not begin with a <slash> character, set curpath to the string formed by the concatenation of the value of PWD, a <slash> character if the value of PWD did not end with a <slash> character, and curpath. 8. The curpath value shall then be converted to canonical form as follows, considering each component from beginning to end, in sequence: a. Dot components and any <slash> characters that separate them from the next component shall be deleted. b. For each dot-dot component, if there is a preceding component and it is neither root nor dot-dot, then: i. If the preceding component does not refer (in the context of pathname resolution with symbolic links followed) to a directory, then the cd utility shall display an appropriate error message and no further steps shall be taken. ii. The preceding component, all <slash> characters separating the preceding component from dot-dot, dot-dot, and all <slash> characters separating dot-dot from the following component (if any) shall be deleted. c. 
An implementation may further simplify curpath by removing any trailing <slash> characters that are not also leading <slash> characters, replacing multiple non-leading consecutive <slash> characters with a single <slash>, and replacing three or more leading <slash> characters with a single <slash>. If, as a result of this canonicalization, the curpath variable is null, no further steps shall be taken. 9. If curpath is longer than {PATH_MAX} bytes (including the terminating null) and the directory operand was not longer than {PATH_MAX} bytes (including the terminating null), then curpath shall be converted from an absolute pathname to an equivalent relative pathname if possible. This conversion shall always be considered possible if the value of PWD, with a trailing <slash> added if it does not already have one, is an initial substring of curpath. Whether or not it is considered possible under other circumstances is unspecified. Implementations may also apply this conversion if curpath is not longer than {PATH_MAX} bytes or the directory operand was longer than {PATH_MAX} bytes. 10. The cd utility shall then perform actions equivalent to the chdir() function called with curpath as the path argument. If these actions fail for any reason, the cd utility shall display an appropriate error message and the remainder of this step shall not be executed. If the -P option is not in effect, the PWD environment variable shall be set to the value that curpath had on entry to step 9 (i.e., before conversion to a relative pathname). If the -P option is in effect, the PWD environment variable shall be set to the string that would be output by pwd -P. If there is insufficient permission on the new directory, or on any parent of that directory, to determine the current working directory, the value of the PWD environment variable is unspecified. 
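Steps 8a and 8b amount to a purely textual rewrite, independent of the filesystem (apart from the symlink check in step 8.b.i). A minimal illustration as a shell function, for absolute paths only; canon is a hypothetical helper, not part of any standard, and it skips the 8.b.i check:

```shell
# canon: logically canonicalize an absolute path the way `cd -L` does,
# deleting dot components (step 8a) and resolving dot-dot against the
# textual preceding component (step 8b). It never consults the
# filesystem, so step 8.b.i's symlink check is skipped.
canon() {
    old_ifs=$IFS; IFS=/; set -f
    set -- $1                     # split the path on '/'
    set +f; IFS=$old_ifs
    out=
    for comp in "$@"; do
        case $comp in
            '' | .) ;;                # 8a: drop empty and dot components
            ..)     out=${out%/*} ;;  # 8b: delete the preceding component
            *)      out=$out/$comp ;;
        esac
    done
    printf '%s\n' "${out:-/}"
}

canon /usr/share/../local/./bin   # prints /usr/local/bin
```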
If, during the execution of the above steps, the PWD environment variable is set, the OLDPWD environment variable shall also be set to the value of the old working directory (that is the current working directory immediately prior to the call to cd). OPTIONS top The cd utility shall conform to the Base Definitions volume of POSIX.1-2017, Section 12.2, Utility Syntax Guidelines. The following options shall be supported by the implementation: -L Handle the operand dot-dot logically; symbolic link components shall not be resolved before dot-dot components are processed (see steps 8. and 9. in the DESCRIPTION). -P Handle the operand dot-dot physically; symbolic link components shall be resolved before dot-dot components are processed (see step 7. in the DESCRIPTION). If both -L and -P options are specified, the last of these options shall be used and all others ignored. If neither -L nor -P is specified, the operand shall be handled dot-dot logically; see the DESCRIPTION. OPERANDS top The following operands shall be supported: directory An absolute or relative pathname of the directory that shall become the new working directory. The interpretation of a relative pathname by cd depends on the -L option and the CDPATH and PWD environment variables. If directory is an empty string, the results are unspecified. - When a <hyphen-minus> is used as the operand, this shall be equivalent to the command: cd "$OLDPWD" && pwd which changes to the previous working directory and then writes its name. STDIN top Not used. INPUT FILES top None. ENVIRONMENT VARIABLES top The following environment variables shall affect the execution of cd: CDPATH A <colon>-separated list of pathnames that refer to directories. The cd utility shall use this list in its attempt to change the directory, as described in the DESCRIPTION. An empty string in place of a directory pathname represents the current directory. If CDPATH is not set, it shall be treated as if it were an empty string. 
HOME The name of the directory, used when no directory operand is specified. LANG Provide a default value for the internationalization variables that are unset or null. (See the Base Definitions volume of POSIX.1-2017, Section 8.2, Internationalization Variables for the precedence of internationalization variables used to determine the values of locale categories.) LC_ALL If set to a non-empty string value, override the values of all the other internationalization variables. LC_CTYPE Determine the locale for the interpretation of sequences of bytes of text data as characters (for example, single-byte as opposed to multi-byte characters in arguments). LC_MESSAGES Determine the locale that should be used to affect the format and contents of diagnostic messages written to standard error. NLSPATH Determine the location of message catalogs for the processing of LC_MESSAGES. OLDPWD A pathname of the previous working directory, used by cd -. PWD This variable shall be set as specified in the DESCRIPTION. If an application sets or unsets the value of PWD, the behavior of cd is unspecified. ASYNCHRONOUS EVENTS top Default. STDOUT top If a non-empty directory name from CDPATH is used, or if cd - is used, an absolute pathname of the new working directory shall be written to the standard output as follows: "%s\n", <new directory> Otherwise, there shall be no output. STDERR top The standard error shall be used only for diagnostic messages. OUTPUT FILES top None. EXTENDED DESCRIPTION top None. EXIT STATUS top The following exit values shall be returned: 0 The directory was successfully changed. >0 An error occurred. CONSEQUENCES OF ERRORS top The working directory shall remain unchanged. The following sections are informative. APPLICATION USAGE top Since cd affects the current shell execution environment, it is always provided as a shell regular built-in. 
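The CDPATH search and the STDOUT rule interact as follows: when a non-empty CDPATH entry supplies the directory, the resulting absolute pathname is written to standard output. A small sketch using throwaway directories under /tmp:

```shell
mkdir -p /tmp/cdpath-demo/projects/web

CDPATH=/tmp/cdpath-demo/projects
export CDPATH

# "web" is found via CDPATH rather than relative to the current
# directory, so cd prints the new working directory:
cd web            # prints /tmp/cdpath-demo/projects/web

unset CDPATH      # avoid surprising later cd invocations
```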
If it is called in a subshell or separate utility execution environment, such as one of the following: (cd /tmp) nohup cd find . -exec cd {} \; it does not affect the working directory of the caller's environment. The user must have execute (search) permission in directory in order to change to it. EXAMPLES top The following template can be used to perform processing in the directory specified by location and end up in the current working directory in use before the first cd command was issued: cd location if [ $? -ne 0 ] then print error message exit 1 fi ... do whatever is desired as long as the OLDPWD environment variable is not modified cd - RATIONALE top The use of the CDPATH was introduced in the System V shell. Its use is analogous to the use of the PATH variable in the shell. The BSD C shell used a shell parameter cdpath for this purpose. A common extension when HOME is undefined is to get the login directory from the user database for the invoking user. This does not occur on System V implementations. Some historical shells, such as the KornShell, took special actions when the directory name contained a dot-dot component, selecting the logical parent of the directory, rather than the actual parent directory; that is, it moved up one level toward the '/' in the pathname, remembering what the user typed, rather than performing the equivalent of: chdir(".."); In such a shell, the following commands would not necessarily produce equivalent output for all directories: cd .. && ls ls .. This behavior is now the default. It is not consistent with the definition of dot-dot in most historical practice; that is, while this behavior has been optionally available in the KornShell, other shells have historically not supported this functionality. The logical pathname is stored in the PWD environment variable when the cd utility completes and this value is used to construct the next directory name if cd is invoked with the -L option. FUTURE DIRECTIONS top None. 
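The logical behavior discussed in RATIONALE is observable whenever a symbolic link is involved; a sketch using throwaway directories under /tmp:

```shell
mkdir -p /tmp/cd-demo/real/sub
rm -f /tmp/cd-demo/link
ln -s /tmp/cd-demo/real /tmp/cd-demo/link

cd -L /tmp/cd-demo/link    # logical (the default): PWD keeps the link
pwd                        # prints /tmp/cd-demo/link
cd -L ..                   # dot-dot is handled textually
pwd                        # prints /tmp/cd-demo

cd -P /tmp/cd-demo/link    # physical: the symlink is resolved first
pwd -P                     # prints the fully resolved .../cd-demo/real
```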
SEE ALSO top Section 2.12, Shell Execution Environment, pwd(1p) The Base Definitions volume of POSIX.1-2017, Chapter 8, Environment Variables, Section 12.2, Utility Syntax Guidelines The System Interfaces volume of POSIX.1-2017, chdir(3p) COPYRIGHT top Portions of this text are reprinted and reproduced in electronic form from IEEE Std 1003.1-2017, Standard for Information Technology -- Portable Operating System Interface (POSIX), The Open Group Base Specifications Issue 7, 2018 Edition, Copyright (C) 2018 by the Institute of Electrical and Electronics Engineers, Inc and The Open Group. In the event of any discrepancy between this version and the original IEEE and The Open Group Standard, the original IEEE and The Open Group Standard is the referee document. The original Standard can be obtained online at http://www.opengroup.org/unix/online.html . Any typographical or formatting errors that appear in this page are most likely to have been introduced during the conversion of the source files to man page format. To report such errors, see https://www.kernel.org/doc/man-pages/reporting_bugs.html . IEEE/The Open Group 2017 CD(1P) Pages that refer to this page: pwd(1p), sh(1p) | # cd\n\n> Change the current working directory.\n> More information: <https://manned.org/cd>.\n\n- Go to the specified directory:\n\n`cd {{path/to/directory}}`\n\n- Go up to the parent of the current directory:\n\n`cd ..`\n\n- Go to the home directory of the current user:\n\n`cd`\n\n- Go to the home directory of the specified user:\n\n`cd ~{{username}}`\n\n- Go to the previously chosen directory:\n\n`cd -`\n\n- Go to the root directory:\n\n`cd /`\n |
cfdisk | cfdisk(8) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training cfdisk(8) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | COMMANDS | COLORS | ENVIRONMENT | AUTHORS | SEE ALSO | REPORTING BUGS | AVAILABILITY CFDISK(8) System Administration CFDISK(8) NAME top cfdisk - display or manipulate a disk partition table SYNOPSIS top cfdisk [options] [device] DESCRIPTION top cfdisk is a curses-based program for partitioning any block device. The default device is /dev/sda. Note that cfdisk provides basic partitioning functionality with a user-friendly interface. If you need advanced features, use fdisk(8) instead. All disk label changes will remain in memory only, and the disk will be unmodified until you decide to write your changes. Be careful before using the write command. Since version 2.25 cfdisk supports MBR (DOS), GPT, SUN and SGI disk labels, but no longer provides any functionality for CHS (Cylinder-Head-Sector) addressing. CHS has never been important for Linux, and this addressing concept does not make any sense for new devices. Since version 2.25 cfdisk also does not provide a 'print' command any more. This functionality is provided by the utilities partx(8) and lsblk(8) in a very comfortable and rich way. If you want to remove an old partition table from a device, use wipefs(8). OPTIONS top -h, --help Display help text and exit. -V, --version Print version and exit. -L, --color[=when] Colorize the output. The optional argument when can be auto, never or always. If the when argument is omitted, it defaults to auto. The colors can be disabled; for the current built-in default see the --help output. See also the COLORS section. --lock[=mode] Use an exclusive BSD lock for the device or file it operates on. The optional argument mode can be yes, no (or 1 and 0) or nonblock. If the mode argument is omitted, it defaults to yes. This option overwrites environment variable $LOCK_BLOCK_DEVICE. 
The default is not to use any lock at all, but it's recommended to avoid collisions with systemd-udevd(8) or other tools. -r, --read-only Forced open in read-only mode. -z, --zero Start with an in-memory zeroed partition table. This option does not zero the partition table on the disk; rather, it simply starts the program without reading the existing partition table. This option allows you to create a new partition table from scratch or from an sfdisk(8)-compatible script. COMMANDS top The commands for cfdisk can be entered by pressing the corresponding key (pressing Enter after the command is not necessary). Here is a list of the available commands: b Toggle the bootable flag of the current partition. This allows you to select which primary partition is bootable on the drive. This command may not be available for all partition label types. d Delete the current partition. This will convert the current partition into free space and merge it with any free space immediately surrounding the current partition. A partition already marked as free space or marked as unusable cannot be deleted. h Show the help screen. n Create a new partition from free space. cfdisk then prompts you for the size of the partition you want to create. The default size is equal to the entire available free space at the current position. The size may be followed by a multiplicative suffix: KiB (=1024), MiB (=1024*1024), and so on for GiB, TiB, PiB, EiB, ZiB and YiB (the "iB" is optional, e.g., "K" has the same meaning as "KiB"). q Quit the program. This will exit the program without writing any data to the disk. r Reduce or enlarge the current partition. cfdisk then prompts you for the new size of the partition. The default size is the current size. A partition marked as free space or marked as unusable cannot be resized. Note that reducing the size of a partition might destroy data on that partition. s Sort the partitions in ascending start-sector order. 
When deleting and adding partitions, it is likely that the numbering of the partitions will no longer match their order on the disk. This command restores that match. t Change the partition type. By default, new partitions are created as Linux partitions. u Dump the current in-memory partition table to an sfdisk(8)-compatible script file. The script files are compatible between cfdisk, fdisk(8), sfdisk(8) and other libfdisk applications. For more details see sfdisk(8). It is also possible to load an sfdisk-script into cfdisk if there is no partition table on the device or when you start cfdisk with the --zero command-line option. W Write the partition table to disk (you must enter an uppercase W). Since this might destroy data on the disk, you must either confirm or deny the write by entering `yes' or `no'. If you enter `yes', cfdisk will write the partition table to disk and then tell the kernel to re-read the partition table from the disk. The re-reading of the partition table does not always work. In such a case you need to inform the kernel about any new partitions by using partprobe(8) or partx(8), or by rebooting the system. x Toggle extra information about a partition. Up Arrow, Down Arrow Move the cursor to the previous or next partition. If there are more partitions than can be displayed on a screen, you can display the next (previous) set of partitions by moving down (up) at the last (first) partition displayed on the screen. Left Arrow, Right Arrow Select the preceding or the next menu item. Hitting Enter will execute the currently selected item. All commands can be entered with either uppercase or lowercase letters (except for Write). When in a submenu or at a prompt, you can hit the Esc key to return to the main menu. COLORS top The output colorization is implemented by terminal-colors.d(5) functionality. 
Implicit coloring can be disabled by an empty file /etc/terminal-colors.d/cfdisk.disable for the cfdisk command or for all tools by /etc/terminal-colors.d/disable The user-specific $XDG_CONFIG_HOME/terminal-colors.d or $HOME/.config/terminal-colors.d overrides the global setting. Note that the output colorization may be enabled by default, and in this case terminal-colors.d directories do not have to exist yet. cfdisk does not support color customization with a color-scheme file. ENVIRONMENT top CFDISK_DEBUG=all enables cfdisk debug output. LIBFDISK_DEBUG=all enables libfdisk debug output. LIBBLKID_DEBUG=all enables libblkid debug output. LIBSMARTCOLS_DEBUG=all enables libsmartcols debug output. LIBSMARTCOLS_DEBUG_PADDING=on use visible padding characters. Requires enabled LIBSMARTCOLS_DEBUG. LOCK_BLOCK_DEVICE=<mode> use exclusive BSD lock. The mode is "1" or "0". See --lock for more details. AUTHORS top Karel Zak <kzak@redhat.com> The current cfdisk implementation is based on the original cfdisk from Kevin E. Martin <martin@cs.unc.edu>. SEE ALSO top fdisk(8), parted(8), partprobe(8), partx(8), sfdisk(8) REPORTING BUGS top For bug reports, use the issue tracker at https://github.com/util-linux/util-linux/issues. AVAILABILITY top The cfdisk command is part of the util-linux package which can be downloaded from Linux Kernel Archive <https://www.kernel.org/pub/linux/utils/util-linux/>. 
util-linux 2.39.594-1e0ad 2023-07-19 CFDISK(8) Pages that refer to this page: fdisk(8), sfdisk(8) | # cfdisk\n\n> Manage partition tables and partitions on a hard disk using a curses UI.\n> More information: <https://manned.org/cfdisk>.\n\n- Start the partition manipulator with a specific device:\n\n`cfdisk {{/dev/sdX}}`\n\n- Create a new partition table for a specific device and manage it:\n\n`cfdisk --zero {{/dev/sdX}}`\n |
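The u command and the --zero option work with plain-text sfdisk(8) scripts. A minimal script might look like this (the layout and the type shorthands U and L are illustrative; actually writing it to a disk is a separate, deliberate step):

```shell
# Describe a GPT disk: a 512 MiB EFI partition (type U), then a Linux
# partition (type L) taking the remaining space.
cat > /tmp/layout.sfdisk <<'EOF'
label: gpt
size=512MiB, type=U
type=L
EOF

# Review the script; cfdisk offers to load such a script when started
# with --zero (or on a device with no partition table), and sfdisk can
# apply it non-interactively: sfdisk /dev/sdX < /tmp/layout.sfdisk
cat /tmp/layout.sfdisk
```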
chage | chage(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training chage(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | NOTE | CONFIGURATION | FILES | EXIT VALUES | SEE ALSO | COLOPHON CHAGE(1) User Commands CHAGE(1) NAME top chage - change user password expiry information SYNOPSIS top chage [options] LOGIN DESCRIPTION top The chage command changes the number of days between password changes and the date of the last password change. This information is used by the system to determine when a user must change their password. OPTIONS top The options which apply to the chage command are: -d, --lastday LAST_DAY Set the number of days since January 1st, 1970 when the password was last changed. The date may also be expressed in the format YYYY-MM-DD (or the format more commonly used in your area). If the LAST_DAY is set to 0 the user is forced to change his password on the next log on. -E, --expiredate EXPIRE_DATE Set the date or number of days since January 1, 1970 on which the user's account will no longer be accessible. The date may also be expressed in the format YYYY-MM-DD (or the format more commonly used in your area). A user whose account is locked must contact the system administrator before being able to use the system again. For example the following can be used to set an account to expire in 180 days: chage -E $(date -d +180days +%Y-%m-%d) Passing the number -1 as the EXPIRE_DATE will remove an account expiration date. -h, --help Display help message and exit. -i, --iso8601 When printing dates, use YYYY-MM-DD format. -I, --inactive INACTIVE Set the number of days of inactivity after a password has expired before the account is locked. The INACTIVE option is the number of days of inactivity. A user whose account is locked must contact the system administrator before being able to use the system again. Passing the number -1 as the INACTIVE will remove an account's inactivity. 
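The `-E` example above relies on GNU date to compute the calendar day 180 days out. Since running chage itself requires root, this sketch shows only the command substitution being expanded:

```shell
# Expand the substitution from the -E example:
#   chage -E $(date -d +180days +%Y-%m-%d) LOGIN
expire=$(date -d +180days +%Y-%m-%d)
echo "$expire"   # a YYYY-MM-DD date roughly six months from today
```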
-l, --list Show account aging information. -m, --mindays MIN_DAYS Set the minimum number of days between password changes to MIN_DAYS. A value of zero for this field indicates that the user may change their password at any time. -M, --maxdays MAX_DAYS Set the maximum number of days during which a password is valid. When MAX_DAYS plus LAST_DAY is less than the current day, the user will be required to change their password before being able to use their account. This occurrence can be planned for in advance by use of the -W option, which provides the user with advance warning. Passing the number -1 as MAX_DAYS will remove checking a password's validity. -R, --root CHROOT_DIR Apply changes in the CHROOT_DIR directory and use the configuration files from the CHROOT_DIR directory. Only absolute paths are supported. -P, --prefix PREFIX_DIR Apply changes to configuration files under the root filesystem found under the directory PREFIX_DIR. This option does not chroot and is intended for preparing a cross-compilation target. Some limitations: NIS and LDAP users/groups are not verified. PAM authentication is using the host files. No SELINUX support. -W, --warndays WARN_DAYS Set the number of days of warning before a password change is required. The WARN_DAYS option is the number of days prior to the password expiring that a user will be warned their password is about to expire. If none of the options are selected, chage operates in an interactive fashion, prompting the user with the current values for all of the fields. Enter the new value to change the field, or leave the line blank to use the current value. The current value is displayed between a pair of [ ] marks. NOTE top The chage program requires a shadow password file to be available. The chage program will report only the information from the shadow password file. This implies that configuration from other sources (e.g. 
LDAP or empty password hash field from the passwd file) that affect the user's login will not be shown in the chage output. The chage program will also not report any inconsistency between the shadow and passwd files (e.g. missing x in the passwd file). The pwck(8) command can be used to check for this kind of inconsistency. The chage command is restricted to the root user, except for the -l option, which may be used by an unprivileged user to determine when their password or account is due to expire. CONFIGURATION top The following configuration variables in /etc/login.defs change the behavior of this tool: FILES top /etc/passwd User account information. /etc/shadow Secure user account information. EXIT VALUES top The chage command exits with the following values: 0 success 1 permission denied 2 invalid command syntax 15 can't find the shadow password file SEE ALSO top passwd(5), shadow(5). COLOPHON top This page is part of the shadow-utils (utilities for managing accounts and shadow password files) project. Information about the project can be found at https://github.com/shadow-maint/shadow. If you have a bug report for this manual page, send it to pkg-shadow-devel@alioth-lists.debian.net. This page was obtained from the project's upstream Git repository https://github.com/shadow-maint/shadow on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-15.) shadow-utils 4.11.1 12/22/2023 CHAGE(1) Pages that refer to this page: shadow(5) 
| # chage\n\n> Change user account and password expiry information.\n> More information: <https://manned.org/chage>.\n\n- List password information for the user:\n\n`chage --list {{username}}`\n\n- Enable password expiration in 10 days:\n\n`sudo chage --maxdays {{10}} {{username}}`\n\n- Disable password expiration:\n\n`sudo chage --maxdays {{-1}} {{username}}`\n\n- Set account expiration date:\n\n`sudo chage --expiredate {{YYYY-MM-DD}} {{username}}`\n\n- Force user to change password on next log in:\n\n`sudo chage --lastday {{0}} {{username}}`\n
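The LAST_DAY and EXPIRE_DATE fields described above are stored as day counts since 1970-01-01; the calendar dates are a display convenience. A sketch of that representation, assuming GNU date:

```shell
# Today expressed the way /etc/shadow stores dates: days since the epoch.
today_days=$(( $(date +%s) / 86400 ))
echo "$today_days"

# Round-trip: convert a day count back to a calendar date (UTC).
date -u -d "@$(( today_days * 86400 ))" +%Y-%m-%d
```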
chattr | chattr(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training chattr(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | ATTRIBUTES | AUTHOR | BUGS AND LIMITATIONS | AVAILABILITY | SEE ALSO | COLOPHON CHATTR(1) General Commands Manual CHATTR(1) NAME top chattr - change file attributes on a Linux file system SYNOPSIS top chattr [ -RVf ] [ -v version ] [ -p project ] [ mode ] files... DESCRIPTION top chattr changes the file attributes on a Linux file system. The format of a symbolic mode is +-=[aAcCdDeFijmPsStTux]. The operator '+' causes the selected attributes to be added to the existing attributes of the files; '-' causes them to be removed; and '=' causes them to be the only attributes that the files have. The letters 'aAcCdDeFijmPsStTux' select the new attributes for the files: append only (a), no atime updates (A), compressed (c), no copy on write (C), no dump (d), synchronous directory updates (D), extent format (e), case-insensitive directory lookups (F), immutable (i), data journaling (j), don't compress (m), project hierarchy (P), secure deletion (s), synchronous updates (S), no tail-merging (t), top of directory hierarchy (T), undeletable (u), and direct access for files (x). The following attributes are read-only, and may be listed by lsattr(1) but not modified by chattr: encrypted (E), indexed directory (I), inline data (N), and verity (V). Not all flags are supported or utilized by all file systems; refer to file system-specific man pages such as btrfs(5), ext4(5), mkfs.f2fs(8), and xfs(5) for more file system-specific details. OPTIONS top -R Recursively change attributes of directories and their contents. -V Be verbose with chattr's output and print the program version. -f Suppress most error messages. -v version Set the file's version/generation number. -p project Set the file's project number. ATTRIBUTES top a A file with the 'a' attribute set can only be opened in append mode for writing. 
Only the superuser or a process possessing the CAP_LINUX_IMMUTABLE capability can set or clear this attribute. A When a file with the 'A' attribute set is accessed, its atime record is not modified. This avoids a certain amount of disk I/O for laptop systems. c A file with the 'c' attribute set is automatically compressed on the disk by the kernel. A read from this file returns uncompressed data. A write to this file compresses data before storing them on the disk. Note: please make sure to read the bugs and limitations section at the end of this document. (Note: For btrfs, If the 'c' flag is set, then the 'C' flag cannot be set. Also conflicts with btrfs mount option 'nodatasum') C A file with the 'C' attribute set will not be subject to copy-on-write updates. This flag is only supported on file systems which perform copy-on-write. (Note: For btrfs, the 'C' flag should be set on new or empty files. If it is set on a file which already has data blocks, it is undefined when the blocks assigned to the file will be fully stable. If the 'C' flag is set on a directory, it will have no effect on the directory, but new files created in that directory will have the No_COW attribute set. If the 'C' flag is set, then the 'c' flag cannot be set.) d A file with the 'd' attribute set is not a candidate for backup when the dump(8) program is run. D When a directory with the 'D' attribute set is modified, the changes are written synchronously to the disk; this is equivalent to the 'dirsync' mount option applied to a subset of the files. e The 'e' attribute indicates that the file is using extents for mapping the blocks on disk. It may not be removed using chattr(1). E A file, directory, or symlink with the 'E' attribute set is encrypted by the file system. This attribute may not be set or cleared using chattr(1), although it can be displayed by lsattr(1). 
F A directory with the 'F' attribute set indicates that all the path lookups inside that directory are made in a case- insensitive fashion. This attribute can only be changed in empty directories on file systems with the casefold feature enabled. i A file with the 'i' attribute cannot be modified: it cannot be deleted or renamed, no link can be created to this file, most of the file's metadata can not be modified, and the file can not be opened in write mode. Only the superuser or a process possessing the CAP_LINUX_IMMUTABLE capability can set or clear this attribute. I The 'I' attribute is used by the htree code to indicate that a directory is being indexed using hashed trees. It may not be set or cleared using chattr(1), although it can be displayed by lsattr(1). j A file with the 'j' attribute has all of its data written to the ext3 or ext4 journal before being written to the file itself, if the file system is mounted with the "data=ordered" or "data=writeback" options and the file system has a journal. When the file system is mounted with the "data=journal" option all file data is already journalled and this attribute has no effect. Only the superuser or a process possessing the CAP_SYS_RESOURCE capability can set or clear this attribute. m A file with the 'm' attribute is excluded from compression on file systems that support per-file compression. N A file with the 'N' attribute set indicates that the file has data stored inline, within the inode itself. It may not be set or cleared using chattr(1), although it can be displayed by lsattr(1). P A directory with the 'P' attribute set will enforce a hierarchical structure for project id's. This means that files and directories created in the directory will inherit the project id of the directory, rename operations are constrained so when a file or directory is moved into another directory, that the project ids must match. 
In addition, a hard link to a file can only be created when the project id for the file and the destination directory match. s When a file with the 's' attribute set is deleted, its blocks are zeroed and written back to the disk. Note: please make sure to read the bugs and limitations section at the end of this document. S When a file with the 'S' attribute set is modified, the changes are written synchronously to the disk; this is equivalent to the 'sync' mount option applied to a subset of the files. t A file with the 't' attribute will not have a partial block fragment at the end of the file merged with other files (for those file systems which support tail-merging). This is necessary for applications such as LILO which read the file system directly, and which don't understand tail-merged files. Note: As of this writing, the ext2, ext3, and ext4 file systems do not support tail-merging. T A directory with the 'T' attribute will be deemed to be the top of directory hierarchies for the purposes of the Orlov block allocator. This is a hint to the block allocator used by ext3 and ext4 that the subdirectories under this directory are not related, and thus should be spread apart for allocation purposes. For example it is a very good idea to set the 'T' attribute on the /home directory, so that /home/john and /home/mary are placed into separate block groups. For directories where this attribute is not set, the Orlov block allocator will try to group subdirectories closer together where possible. u When a file with the 'u' attribute set is deleted, its contents are saved. This allows the user to ask for its undeletion. Note: please make sure to read the bugs and limitations section at the end of this document. x A file with the 'x' attribute set requests the use of direct access (dax) mode, if the kernel supports DAX. This can be overridden by the 'dax=never' mount option. 
For more information see the kernel documentation for dax: <https://www.kernel.org/doc/html/latest/filesystems/dax.html>. If the attribute is set on an existing directory, it will be inherited by all files and subdirectories that are subsequently created in the directory. If an existing directory already contains files and subdirectories, modifying the attribute on the parent directory does not change the attributes of those existing files and subdirectories. V A file with the 'V' attribute set has fs-verity enabled. It cannot be written to, and the file system will automatically verify all data read from it against a cryptographic hash that covers the entire file's contents, e.g. via a Merkle tree. This makes it possible to efficiently authenticate the file. This attribute may not be set or cleared using chattr(1), although it can be displayed by lsattr(1). AUTHOR top chattr was written by Remy Card <Remy.Card@linux.org>. It is currently being maintained by Theodore Ts'o <tytso@alum.mit.edu>. BUGS AND LIMITATIONS top The 'c', 's', and 'u' attributes are not honored by the ext2, ext3, and ext4 file systems as implemented in the current mainline Linux kernels. Setting 'a' and 'i' attributes will not affect the ability to write to already existing file descriptors. The 'j' option is only useful for ext3 and ext4 file systems. The 'D' option is only useful on Linux kernel 2.5.19 and later. AVAILABILITY top chattr is part of the e2fsprogs package and is available from http://e2fsprogs.sourceforge.net. SEE ALSO top lsattr(1), btrfs(5), ext4(5), mkfs.f2fs(8), xfs(5). COLOPHON top This page is part of the e2fsprogs (utilities for ext2/3/4 filesystems) project. Information about the project can be found at http://e2fsprogs.sourceforge.net/. It is not known how to report bugs for this man page; if you know, please send a mail to man-pages@man7.org. This page was obtained from the project's upstream Git repository git://git.kernel.org/pub/scm/fs/ext2/e2fsprogs.git on 2023-12-22. 
(At that time, the date of the most recent commit that was found in the repository was 2023-12-07.) E2fsprogs version 1.47.0 February 2023 CHATTR(1) Pages that refer to this page: chattr(1), lsattr(1), rm(1), systemd-dissect(1), fallocate(2), ioctl_iflags(2), mount(2), statx(2), utime(2), utimensat(2), ext4(5), sysupdate.d(5), tmpfiles.d(5), xfs(5), btrfs-property(8), xfsdump(8) | # chattr\n\n> Change attributes of files or directories.\n> More information: <https://manned.org/chattr>.\n\n- Make a file or directory immutable to changes and deletion, even by superuser:\n\n`chattr +i {{path/to/file_or_directory}}`\n\n- Make a file or directory mutable:\n\n`chattr -i {{path/to/file_or_directory}}`\n\n- Recursively make an entire directory and contents immutable:\n\n`chattr -R +i {{path/to/directory}}`\n
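The symbolic mode grammar above (a `+`, `-`, or `=` operator followed by attribute letters) can be illustrated with a tiny parser. `describe_mode` is a hypothetical helper for illustration only, not part of chattr:

```shell
# Split a chattr mode like "+ai" into its operator and attribute letters.
describe_mode() {
  mode=$1
  op=${mode%%[a-zA-Z]*}       # leading +, - or =
  letters=${mode#"$op"}
  case $op in
    +) echo "add: $letters" ;;
    -) echo "remove: $letters" ;;
    =) echo "set exactly: $letters" ;;
    *) echo "invalid mode: $mode" >&2; return 1 ;;
  esac
}
describe_mode +ai   # prints "add: ai"
describe_mode =e    # prints "set exactly: e"
```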
chcon | chcon(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training chcon(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | AUTHOR | REPORTING BUGS | COPYRIGHT | SEE ALSO | COLOPHON CHCON(1) User Commands CHCON(1) NAME top chcon - change file security context SYNOPSIS top chcon [OPTION]... CONTEXT FILE... chcon [OPTION]... [-u USER] [-r ROLE] [-l RANGE] [-t TYPE] FILE... chcon [OPTION]... --reference=RFILE FILE... DESCRIPTION top Change the SELinux security context of each FILE to CONTEXT. With --reference, change the security context of each FILE to that of RFILE. Mandatory arguments to long options are mandatory for short options too. --dereference affect the referent of each symbolic link (this is the default), rather than the symbolic link itself -h, --no-dereference affect symbolic links instead of any referenced file -u, --user=USER set user USER in the target security context -r, --role=ROLE set role ROLE in the target security context -t, --type=TYPE set type TYPE in the target security context -l, --range=RANGE set range RANGE in the target security context --no-preserve-root do not treat '/' specially (the default) --preserve-root fail to operate recursively on '/' --reference=RFILE use RFILE's security context rather than specifying a CONTEXT value -R, --recursive operate on files and directories recursively -v, --verbose output a diagnostic for every file processed The following options modify how a hierarchy is traversed when the -R option is also specified. If more than one is specified, only the final one takes effect. -H if a command line argument is a symbolic link to a directory, traverse it -L traverse every symbolic link to a directory encountered -P do not traverse any symbolic links (default) --help display this help and exit --version output version information and exit AUTHOR top Written by Russell Coker and Jim Meyering. 
REPORTING BUGS top GNU coreutils online help: <https://www.gnu.org/software/coreutils/> Report any translation bugs to <https://translationproject.org/team/> COPYRIGHT top Copyright 2023 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. SEE ALSO top Full documentation <https://www.gnu.org/software/coreutils/chcon> or available locally via: info '(coreutils) chcon invocation' COLOPHON top This page is part of the coreutils (basic file, shell and text manipulation utilities) project. Information about the project can be found at http://www.gnu.org/software/coreutils/. If you have a bug report for this manual page, see http://www.gnu.org/software/coreutils/. This page was obtained from the tarball coreutils-9.4.tar.xz fetched from http://ftp.gnu.org/gnu/coreutils/ on 2023-12-22. GNU coreutils 9.4 August 2023 CHCON(1) Pages that refer to this page: secon(1), setrans.conf(5), chcat(8), mcs(8) 
| # chcon\n\n> Change SELinux security context of a file or files/directories.\n> More information: <https://www.gnu.org/software/coreutils/chcon>.\n\n- View security context of a file:\n\n`ls -lZ {{path/to/file}}`\n\n- Change the security context of a target file, using a reference file:\n\n`chcon --reference={{reference_file}} {{target_file}}`\n\n- Change the full SELinux security context of a file:\n\n`chcon {{user}}:{{role}}:{{type}}:{{range/level}} {{filename}}`\n\n- Change only the user part of SELinux security context:\n\n`chcon -u {{user}} {{filename}}`\n\n- Change only the role part of SELinux security context:\n\n`chcon -r {{role}} {{filename}}`\n\n- Change only the type part of SELinux security context:\n\n`chcon -t {{type}} {{filename}}`\n\n- Change only the range/level part of SELinux security context:\n\n`chcon -l {{range/level}} {{filename}}`\n
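An SELinux context is the four colon-separated fields that chcon's -u, -r, -t, and -l options address individually. A sketch splitting a sample context string (the context value is illustrative, not taken from the page, and no SELinux system is needed to run it):

```shell
# user:role:type:range, the four fields of the chcon synopsis
ctx="unconfined_u:object_r:user_home_t:s0"
IFS=: read -r user role type range <<EOF
$ctx
EOF
echo "user=$user role=$role type=$type range=$range"
```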
chcpu | chcpu(8) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training chcpu(8) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | EXIT STATUS | AUTHORS | COPYRIGHT | SEE ALSO | REPORTING BUGS | AVAILABILITY CHCPU(8) System Administration CHCPU(8) NAME top chcpu - configure CPUs SYNOPSIS top chcpu -c|-d|-e|-g cpu-list chcpu -p mode chcpu -r|-h|-V DESCRIPTION top chcpu can modify the state of CPUs. It can enable or disable CPUs, scan for new CPUs, change the CPU dispatching mode of the underlying hypervisor, and request CPUs from the hypervisor (configure) or return CPUs to the hypervisor (deconfigure). Some options have a cpu-list argument. Use this argument to specify a comma-separated list of CPUs. The list can contain individual CPU addresses or ranges of addresses. For example, 0,5,7,9-11 makes the command applicable to the CPUs with the addresses 0, 5, 7, 9, 10, and 11. OPTIONS top -c, --configure cpu-list Configure the specified CPUs. Configuring a CPU means that the hypervisor takes a CPU from the CPU pool and assigns it to the virtual hardware on which your kernel runs. -d, --disable cpu-list Disable the specified CPUs. Disabling a CPU means that the kernel sets it offline. -e, --enable cpu-list Enable the specified CPUs. Enabling a CPU means that the kernel sets it online. A CPU must be configured, see -c, before it can be enabled. -g, --deconfigure cpu-list Deconfigure the specified CPUs. Deconfiguring a CPU means that the hypervisor removes the CPU from the virtual hardware on which the Linux instance runs and returns it to the CPU pool. A CPU must be offline, see -d, before it can be deconfigured. -p, --dispatch mode Set the CPU dispatching mode (polarization). This option has an effect only if your hardware architecture and hypervisor support CPU polarization. Available modes are: horizontal The workload is spread across all available CPUs. vertical The workload is concentrated on few CPUs. 
-r, --rescan Trigger a rescan of CPUs. After a rescan, the Linux kernel recognizes the new CPUs. Use this option on systems that do not automatically detect newly attached CPUs. -h, --help Display help text and exit. -V, --version Print version and exit. EXIT STATUS top chcpu has the following exit status values: 0 success 1 failure 64 partial success AUTHORS top Heiko Carstens <heiko.carstens@de.ibm.com> COPYRIGHT top Copyright IBM Corp. 2011 SEE ALSO top lscpu(1) REPORTING BUGS top For bug reports, use the issue tracker at https://github.com/util-linux/util-linux/issues. AVAILABILITY top The chcpu command is part of the util-linux package which can be downloaded from Linux Kernel Archive <https://www.kernel.org/pub/linux/utils/util-linux/>. This page is part of the util-linux (a random collection of Linux utilities) project. Information about the project can be found at https://www.kernel.org/pub/linux/utils/util-linux/. If you have a bug report for this manual page, send it to util-linux@vger.kernel.org. This page was obtained from the project's upstream Git repository git://git.kernel.org/pub/scm/utils/util-linux/util-linux.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-14.) util-linux 2.39.594-1e0ad 2023-07-19 CHCPU(8) Pages that refer to this page: lscpu(1) 
| # chcpu\n\n> Enable/disable a system's CPUs.\n> More information: <https://manned.org/chcpu>.\n\n- Disable one or more CPUs by their IDs:\n\n`chcpu -d {{1,3}}`\n\n- Enable one or more ranges of CPUs by their IDs:\n\n`chcpu -e {{1-3,5-7}}`\n |
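The cpu-list syntax described above (individual IDs mixed with ranges) can be expanded with standard tools. `expand_cpu_list` is a hypothetical helper for illustration; chcpu itself does this parsing internally:

```shell
# Expand "0,5,7,9-11" into the individual CPU IDs chcpu would act on.
expand_cpu_list() {
  echo "$1" | tr ',' '\n' | while IFS=- read -r lo hi; do
    seq "$lo" "${hi:-$lo}"   # a bare ID is a one-element range
  done | paste -sd' ' -
}
expand_cpu_list 0,5,7,9-11   # prints "0 5 7 9 10 11"
```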
chfn | chfn(1) - Linux manual page Another version of this page is provided by the shadow-utils project chfn(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | CONFIG FILE ITEMS | EXIT STATUS | AUTHORS | SEE ALSO | REPORTING BUGS | AVAILABILITY CHFN(1) User Commands CHFN(1) NAME top chfn - change your finger information SYNOPSIS top chfn [-f full-name] [-o office] [-p office-phone] [-h home-phone] [-u] [-V] [username] DESCRIPTION top chfn is used to change your finger information. This information is stored in the /etc/passwd file, and is displayed by the finger program. The Linux finger command will display four pieces of information that can be changed by chfn: your real name, your work room and phone, and your home phone. Any of the four pieces of information can be specified on the command line. If no information is given on the command line, chfn enters interactive mode. In interactive mode, chfn will prompt for each field. At a prompt, you can enter the new information, or just press return to leave the field unchanged. Enter the keyword "none" to make the field blank. chfn supports non-local entries (kerberos, LDAP, etc.) if linked with libuser, otherwise use ypchfn(1), lchfn(1) or any other implementation for non-local entries. OPTIONS top -f, --full-name full-name Specify your real name. -o, --office office Specify your office room number. -p, --office-phone office-phone Specify your office phone number. -h, --home-phone home-phone Specify your home phone number. -u, --help Display help text and exit. -V, --version Print version and exit. The short option -V has been used since version 2.39; older versions used the deprecated -v. CONFIG FILE ITEMS top chfn reads the /etc/login.defs configuration file (see login.defs(5)). 
Note that the configuration file could be distributed with another package (e.g., shadow-utils). The following configuration items are relevant for chfn: CHFN_RESTRICT string Indicate which fields are changeable by chfn. The boolean setting "yes" means that only the Office, Office Phone and Home Phone fields are changeable, and boolean setting "no" means that also the Full Name is changeable. Another way to specify changeable fields is by abbreviations: f = Full Name, r = Office (room), w = Office (work) Phone, h = Home Phone. For example, CHFN_RESTRICT "wh" allows changing work and home phone numbers. If CHFN_RESTRICT is undefined, then all finger information is read-only. This is the default. EXIT STATUS top Returns 0 if operation was successful, 1 if operation failed or command syntax was not valid. AUTHORS top Salvatore Valente <svalente@mit.edu> SEE ALSO top chsh(1), finger(1), login.defs(5), passwd(5) REPORTING BUGS top For bug reports, use the issue tracker at https://github.com/util-linux/util-linux/issues. AVAILABILITY top The chfn command is part of the util-linux package which can be downloaded from Linux Kernel Archive <https://www.kernel.org/pub/linux/utils/util-linux/>. This page is part of the util-linux (a random collection of Linux utilities) project. Information about the project can be found at https://www.kernel.org/pub/linux/utils/util-linux/. If you have a bug report for this manual page, send it to util-linux@vger.kernel.org. This page was obtained from the project's upstream Git repository git://git.kernel.org/pub/scm/utils/util-linux/util-linux.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-14.) 
util-linux 2.39.594-1e0ad 2023-07-19 CHFN(1) Pages that refer to this page: chsh(1@@shadow-utils), passwd(5), groupadd(8), groupdel(8), groupmems(8), groupmod(8), useradd(8), userdel(8), usermod(8) | # chfn\n\n> Update `finger` info for a user.\n> More information: <https://manned.org/chfn>.\n\n- Update a user's "Name" field in the output of `finger`:\n\n`chfn -f {{new_display_name}} {{username}}`\n\n- Update a user's "Office Room Number" field for the output of `finger`:\n\n`chfn -o {{new_office_room_number}} {{username}}`\n\n- Update a user's "Office Phone Number" field for the output of `finger`:\n\n`chfn -p {{new_office_telephone_number}} {{username}}`\n\n- Update a user's "Home Phone Number" field for the output of `finger`:\n\n`chfn -h {{new_home_telephone_number}} {{username}}`\n
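The CHFN_RESTRICT abbreviations described above can be combined into a single value; a sketch of the corresponding /etc/login.defs line, reusing the "wh" example from the text:

```
# /etc/login.defs fragment: let users change only their work ("w") and
# home ("h") phone numbers via chfn; Full Name and Office stay read-only.
CHFN_RESTRICT   wh
```

Leaving CHFN_RESTRICT undefined makes all finger information read-only, as noted above.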
chgrp | chgrp(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training chgrp(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | EXAMPLES | AUTHOR | REPORTING BUGS | COPYRIGHT | SEE ALSO | COLOPHON CHGRP(1) User Commands CHGRP(1) NAME top chgrp - change group ownership SYNOPSIS top chgrp [OPTION]... GROUP FILE... chgrp [OPTION]... --reference=RFILE FILE... DESCRIPTION top Change the group of each FILE to GROUP. With --reference, change the group of each FILE to that of RFILE. -c, --changes like verbose but report only when a change is made -f, --silent, --quiet suppress most error messages -v, --verbose output a diagnostic for every file processed --dereference affect the referent of each symbolic link (this is the default), rather than the symbolic link itself -h, --no-dereference affect symbolic links instead of any referenced file (useful only on systems that can change the ownership of a symlink) --no-preserve-root do not treat '/' specially (the default) --preserve-root fail to operate recursively on '/' --reference=RFILE use RFILE's group rather than specifying a GROUP. RFILE is always dereferenced if a symbolic link. -R, --recursive operate on files and directories recursively The following options modify how a hierarchy is traversed when the -R option is also specified. If more than one is specified, only the final one takes effect. -H if a command line argument is a symbolic link to a directory, traverse it -L traverse every symbolic link to a directory encountered -P do not traverse any symbolic links (default) --help display this help and exit --version output version information and exit EXAMPLES top chgrp staff /u Change the group of /u to "staff". chgrp -hR staff /u Change the group of /u and subfiles to "staff". AUTHOR top Written by David MacKenzie and Jim Meyering. 
REPORTING BUGS top GNU coreutils online help: <https://www.gnu.org/software/coreutils/> Report any translation bugs to <https://translationproject.org/team/> COPYRIGHT top Copyright 2023 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. SEE ALSO top chown(1), chown(2) Full documentation <https://www.gnu.org/software/coreutils/chgrp> or available locally via: info '(coreutils) chgrp invocation' COLOPHON top This page is part of the coreutils (basic file, shell and text manipulation utilities) project. Information about the project can be found at http://www.gnu.org/software/coreutils/. If you have a bug report for this manual page, see http://www.gnu.org/software/coreutils/. This page was obtained from the tarball coreutils-9.4.tar.xz fetched from http://ftp.gnu.org/gnu/coreutils/ on 2023-12-22. If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org GNU coreutils 9.4 August 2023 CHGRP(1) Pages that refer to this page: chown(2), group(5), symlink(7) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH. 
| # chgrp\n\n> Change group ownership of files and directories.\n> More information: <https://www.gnu.org/software/coreutils/chgrp>.\n\n- Change the owner group of a file/directory:\n\n`chgrp {{group}} {{path/to/file_or_directory}}`\n\n- Recursively change the owner group of a directory and its contents:\n\n`chgrp -R {{group}} {{path/to/directory}}`\n\n- Change the owner group of a symbolic link:\n\n`chgrp -h {{group}} {{path/to/symlink}}`\n\n- Change the owner group of a file/directory to match a reference file:\n\n`chgrp --reference={{path/to/reference_file}} {{path/to/file_or_directory}}`\n |
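The `--reference` form above can be exercised without special privileges, since copying a group between two files the caller already owns is always allowed. A small sketch using throwaway temp files:

```shell
# Copy one file's group onto another with chgrp --reference.
# Both files are created by the caller, so no privileges are needed.
tmpdir=$(mktemp -d)
touch "$tmpdir/ref" "$tmpdir/target"
chgrp --reference="$tmpdir/ref" "$tmpdir/target"
stat -c '%G' "$tmpdir/ref" "$tmpdir/target"   # both lines now show the same group
rm -rf "$tmpdir"
```

Note that `--reference` always dereferences `RFILE` if it is a symbolic link, as the option table states.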
chmod | chmod(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training chmod(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | SETUID AND SETGID BITS | RESTRICTED DELETION FLAG OR STICKY BIT | OPTIONS | AUTHOR | REPORTING BUGS | COPYRIGHT | SEE ALSO | COLOPHON CHMOD(1) User Commands CHMOD(1) NAME top chmod - change file mode bits SYNOPSIS top chmod [OPTION]... MODE[,MODE]... FILE... chmod [OPTION]... OCTAL-MODE FILE... chmod [OPTION]... --reference=RFILE FILE... DESCRIPTION top This manual page documents the GNU version of chmod. chmod changes the file mode bits of each given file according to mode, which can be either a symbolic representation of changes to make, or an octal number representing the bit pattern for the new mode bits. The format of a symbolic mode is [ugoa...][[-+=][perms...]...], where perms is either zero or more letters from the set rwxXst, or a single letter from the set ugo. Multiple symbolic modes can be given, separated by commas. A combination of the letters ugoa controls which users' access to the file will be changed: the user who owns it (u), other users in the file's group (g), other users not in the file's group (o), or all users (a). If none of these are given, the effect is as if (a) were given, but bits that are set in the umask are not affected. The operator + causes the selected file mode bits to be added to the existing file mode bits of each file; - causes them to be removed; and = causes them to be added and causes unmentioned bits to be removed except that a directory's unmentioned set user and group ID bits are not affected. The letters rwxXst select file mode bits for the affected users: read (r), write (w), execute (or search for directories) (x), execute/search only if the file is a directory or already has execute permission for some user (X), set user or group ID on execution (s), restricted deletion flag or sticky bit (t). 
Instead of one or more of these letters, you can specify exactly one of the letters ugo: the permissions granted to the user who owns the file (u), the permissions granted to other users who are members of the file's group (g), and the permissions granted to users that are in neither of the two preceding categories (o). A numeric mode is from one to four octal digits (0-7), derived by adding up the bits with values 4, 2, and 1. Omitted digits are assumed to be leading zeros. The first digit selects the set user ID (4) and set group ID (2) and restricted deletion or sticky (1) attributes. The second digit selects permissions for the user who owns the file: read (4), write (2), and execute (1); the third selects permissions for other users in the file's group, with the same values; and the fourth for other users not in the file's group, with the same values. chmod never changes the permissions of symbolic links; the chmod system call cannot change their permissions. This is not a problem since the permissions of symbolic links are never used. However, for each symbolic link listed on the command line, chmod changes the permissions of the pointed-to file. In contrast, chmod ignores symbolic links encountered during recursive directory traversals. SETUID AND SETGID BITS top chmod clears the set-group-ID bit of a regular file if the file's group ID does not match the user's effective group ID or one of the user's supplementary group IDs, unless the user has appropriate privileges. Additional restrictions may cause the set-user-ID and set-group-ID bits of MODE or RFILE to be ignored. This behavior depends on the policy and functionality of the underlying chmod system call. When in doubt, check the underlying system behavior. For directories chmod preserves set-user-ID and set-group-ID bits unless you explicitly specify otherwise. You can set or clear the bits with symbolic modes like u+s and g-s. 
To clear these bits for directories with a numeric mode requires an additional leading zero like 00755, leading minus like -6000, or leading equals like =755. RESTRICTED DELETION FLAG OR STICKY BIT top The restricted deletion flag or sticky bit is a single bit, whose interpretation depends on the file type. For directories, it prevents unprivileged users from removing or renaming a file in the directory unless they own the file or the directory; this is called the restricted deletion flag for the directory, and is commonly found on world-writable directories like /tmp. For regular files on some older systems, the bit saves the program's text image on the swap device so it will load more quickly when run; this is called the sticky bit. OPTIONS top Change the mode of each FILE to MODE. With --reference, change the mode of each FILE to that of RFILE. -c, --changes like verbose but report only when a change is made -f, --silent, --quiet suppress most error messages -v, --verbose output a diagnostic for every file processed --no-preserve-root do not treat '/' specially (the default) --preserve-root fail to operate recursively on '/' --reference=RFILE use RFILE's mode instead of specifying MODE values. RFILE is always dereferenced if a symbolic link. -R, --recursive change files and directories recursively --help display this help and exit --version output version information and exit Each MODE is of the form '[ugoa]*([-+=]([rwxXst]*|[ugo]))+|[-+=][0-7]+'. AUTHOR top Written by David MacKenzie and Jim Meyering. REPORTING BUGS top GNU coreutils online help: <https://www.gnu.org/software/coreutils/> Report any translation bugs to <https://translationproject.org/team/> COPYRIGHT top Copyright 2023 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. 
SEE ALSO top chmod(2) Full documentation <https://www.gnu.org/software/coreutils/chmod> or available locally via: info '(coreutils) chmod invocation' COLOPHON top This page is part of the coreutils (basic file, shell and text manipulation utilities) project. Information about the project can be found at http://www.gnu.org/software/coreutils/. If you have a bug report for this manual page, see http://www.gnu.org/software/coreutils/. This page was obtained from the tarball coreutils-9.4.tar.xz fetched from http://ftp.gnu.org/gnu/coreutils/ on 2023-12-22. If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org GNU coreutils 9.4 August 2023 CHMOD(1) Pages that refer to this page: bash(1), chacl(1), find(1), nfs4_setfacl(1), rsync(1), setfacl(1), chmod(2), fcntl(2), lp(4), rsyncd.conf(5), path_resolution(7), symlink(7), xattr(7) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH. 
| # chmod\n\n> Change the access permissions of a file or directory.\n> More information: <https://www.gnu.org/software/coreutils/chmod>.\n\n- Give the [u]ser who owns a file the right to e[x]ecute it:\n\n`chmod u+x {{path/to/file}}`\n\n- Give the [u]ser rights to [r]ead and [w]rite to a file/directory:\n\n`chmod u+rw {{path/to/file_or_directory}}`\n\n- Remove e[x]ecutable rights from the [g]roup:\n\n`chmod g-x {{path/to/file}}`\n\n- Give [a]ll users rights to [r]ead and e[x]ecute:\n\n`chmod a+rx {{path/to/file}}`\n\n- Give [o]thers (not in the file owner's group) the same rights as the [g]roup:\n\n`chmod o=g {{path/to/file}}`\n\n- Remove all rights from [o]thers:\n\n`chmod o= {{path/to/file}}`\n\n- Change permissions recursively giving [g]roup and [o]thers the ability to [w]rite:\n\n`chmod -R g+w,o+w {{path/to/directory}}`\n\n- Recursively give [a]ll users [r]ead permissions to files and e[X]ecute permissions to sub-directories within a directory:\n\n`chmod -R a+rX {{path/to/directory}}`\n |
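The symbolic and numeric modes described in the page are interchangeable; a short sketch showing that `u=rwx,g=rx,o=r` and octal `754` set the same bits:

```shell
# Numeric and symbolic modes produce identical permission bits.
f=$(mktemp)
chmod 754 "$f"
stat -c '%a' "$f"            # octal mode: 754
chmod a= "$f"                # clear all permission bits
chmod u=rwx,g=rx,o=r "$f"    # rebuild the same bits symbolically
stat -c '%a' "$f"            # again: 754
rm -f "$f"
```

Working out the octal digits by hand: user rwx = 4+2+1 = 7, group r-x = 4+1 = 5, other r-- = 4.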
choom | choom(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training choom(1) Linux manual page NAME | DESCRIPTION | OPTIONS | NOTES | AUTHORS | SEE ALSO | REPORTING BUGS | AVAILABILITY CHOOM(1) User Commands CHOOM(1) NAME top choom - display and adjust OOM-killer score. choom -p PID choom -p PID -n number choom -n number [--] command [argument ...] DESCRIPTION top The choom command displays and adjusts Out-Of-Memory killer score setting. OPTIONS top -p, --pid pid Specifies process ID. -n, --adjust value Specify the adjust score value. -h, --help Display help text and exit. -V, --version Print version and exit. NOTES top Linux kernel uses the badness heuristic to select which process gets killed in out of memory conditions. The badness heuristic assigns a value to each candidate task ranging from 0 (never kill) to 1000 (always kill) to determine which process is targeted. The units are roughly a proportion along that range of allowed memory the process may allocate from based on an estimation of its current memory and swap use. For example, if a task is using all allowed memory, its badness score will be 1000. If it is using half of its allowed memory, its score will be 500. There is an additional factor included in the badness score: the current memory and swap usage is discounted by 3% for root processes. The amount of "allowed" memory depends on the context in which the oom killer was called. If it is due to the memory assigned to the allocating task's cpuset being exhausted, the allowed memory represents the set of mems assigned to that cpuset. If it is due to a mempolicy's node(s) being exhausted, the allowed memory represents the set of mempolicy nodes. If it is due to a memory limit (or swap limit) being reached, the allowed memory is that configured limit. Finally, if it is due to the entire system being out of memory, the allowed memory represents all allocatable resources. 
The adjust score value is added to the badness score before it is used to determine which task to kill. Acceptable values range from -1000 to +1000. This allows userspace to polarize the preference for oom killing either by always preferring a certain task or completely disabling it. The lowest possible value, -1000, is equivalent to disabling oom killing entirely for that task since it will always report a badness score of 0. Setting an adjust score value of +500, for example, is roughly equivalent to allowing the remainder of tasks sharing the same system, cpuset, mempolicy, or memory controller resources to use at least 50% more memory. A value of -500, on the other hand, would be roughly equivalent to discounting 50% of the task's allowed memory from being considered as scoring against the task. AUTHORS top Karel Zak <kzak@redhat.com> SEE ALSO top proc(5) REPORTING BUGS top For bug reports, use the issue tracker at https://github.com/util-linux/util-linux/issues. AVAILABILITY top The choom command is part of the util-linux package which can be downloaded from Linux Kernel Archive <https://www.kernel.org/pub/linux/utils/util-linux/>. This page is part of the util-linux (a random collection of Linux utilities) project. Information about the project can be found at https://www.kernel.org/pub/linux/utils/util-linux/. If you have a bug report for this manual page, send it to util-linux@vger.kernel.org. This page was obtained from the project's upstream Git repository git://git.kernel.org/pub/scm/utils/util-linux/util-linux.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-14.) 
If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org util-linux 2.39.594-1e0ad 2023-07-19 CHOOM(1) Pages that refer to this page: proc(5) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH. | # choom\n\n> Display and change the adjust out-of-memory killer score.\n> More information: <https://manned.org/choom>.\n\n- Display the OOM-killer score of the process with a specific ID:\n\n`choom -p {{pid}}`\n\n- Change the adjust OOM-killer score of a specific process:\n\n`choom -p {{pid}} -n {{-1000..+1000}}`\n\n- Run a command with a specific adjust OOM-killer score:\n\n`choom -n {{-1000..+1000}} {{command}} {{argument1 argument2 ...}}`\n |
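The adjust value that `choom` manipulates is exposed by the kernel as `/proc/PID/oom_score_adj`, which makes its effect easy to observe. A sketch, assuming a Linux system with `/proc` mounted; raising the value is allowed without privileges, while lowering it below the current value needs `CAP_SYS_RESOURCE`:

```shell
# Inspect and raise the OOM adjust value of the current shell.
cat /proc/self/oom_score_adj            # usually 0
if command -v choom >/dev/null 2>&1; then
    choom -p $$ -n 200                  # raise the shell's adjust value
else
    echo 200 > /proc/self/oom_score_adj # writing /proc directly is equivalent
fi
cat /proc/self/oom_score_adj            # now 200
```

The fallback branch shows that `choom` is a thin front end over the proc file documented in proc(5).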
chown | chown(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training chown(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | EXAMPLES | AUTHOR | REPORTING BUGS | COPYRIGHT | SEE ALSO | COLOPHON CHOWN(1) User Commands CHOWN(1) NAME top chown - change file owner and group SYNOPSIS top chown [OPTION]... [OWNER][:[GROUP]] FILE... chown [OPTION]... --reference=RFILE FILE... DESCRIPTION top This manual page documents the GNU version of chown. chown changes the user and/or group ownership of each given file. If only an owner (a user name or numeric user ID) is given, that user is made the owner of each given file, and the files' group is not changed. If the owner is followed by a colon and a group name (or numeric group ID), with no spaces between them, the group ownership of the files is changed as well. If a colon but no group name follows the user name, that user is made the owner of the files and the group of the files is changed to that user's login group. If the colon and group are given, but the owner is omitted, only the group of the files is changed; in this case, chown performs the same function as chgrp. If only a colon is given, or if the entire operand is empty, neither the owner nor the group is changed. OPTIONS top Change the owner and/or group of each FILE to OWNER and/or GROUP. With --reference, change the owner and group of each FILE to those of RFILE. 
-c, --changes like verbose but report only when a change is made -f, --silent, --quiet suppress most error messages -v, --verbose output a diagnostic for every file processed --dereference affect the referent of each symbolic link (this is the default), rather than the symbolic link itself -h, --no-dereference affect symbolic links instead of any referenced file (useful only on systems that can change the ownership of a symlink) --from=CURRENT_OWNER:CURRENT_GROUP change the owner and/or group of each file only if its current owner and/or group match those specified here. Either may be omitted, in which case a match is not required for the omitted attribute --no-preserve-root do not treat '/' specially (the default) --preserve-root fail to operate recursively on '/' --reference=RFILE use RFILE's owner and group rather than specifying OWNER:GROUP values. RFILE is always dereferenced. -R, --recursive operate on files and directories recursively The following options modify how a hierarchy is traversed when the -R option is also specified. If more than one is specified, only the final one takes effect. -H if a command line argument is a symbolic link to a directory, traverse it -L traverse every symbolic link to a directory encountered -P do not traverse any symbolic links (default) --help display this help and exit --version output version information and exit Owner is unchanged if missing. Group is unchanged if missing, but changed to login group if implied by a ':' following a symbolic OWNER. OWNER and GROUP may be numeric as well as symbolic. EXAMPLES top chown root /u Change the owner of /u to "root". chown root:staff /u Likewise, but also change its group to "staff". chown -hR root /u Change the owner of /u and subfiles to "root". AUTHOR top Written by David MacKenzie and Jim Meyering. 
REPORTING BUGS top GNU coreutils online help: <https://www.gnu.org/software/coreutils/> Report any translation bugs to <https://translationproject.org/team/> COPYRIGHT top Copyright 2023 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. SEE ALSO top chown(2) Full documentation <https://www.gnu.org/software/coreutils/chown> or available locally via: info '(coreutils) chown invocation' COLOPHON top This page is part of the coreutils (basic file, shell and text manipulation utilities) project. Information about the project can be found at http://www.gnu.org/software/coreutils/. If you have a bug report for this manual page, see http://www.gnu.org/software/coreutils/. This page was obtained from the tarball coreutils-9.4.tar.xz fetched from http://ftp.gnu.org/gnu/coreutils/ on 2023-12-22. If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org GNU coreutils 9.4 August 2023 CHOWN(1) Pages that refer to this page: chgrp(1), chown(2), fd(4), hd(4), initrd(4), lp(4), mem(4), null(4), ram(4), tty(4), ttyS(4), symlink(7), sm-notify(8), start-stop-daemon(8), statd(8) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH. 
| # chown\n\n> Change user and group ownership of files and directories.\n> More information: <https://www.gnu.org/software/coreutils/chown>.\n\n- Change the owner user of a file/directory:\n\n`chown {{user}} {{path/to/file_or_directory}}`\n\n- Change the owner user and group of a file/directory:\n\n`chown {{user}}:{{group}} {{path/to/file_or_directory}}`\n\n- Change the owner user and group to both have the name `user`:\n\n`chown {{user}}: {{path/to/file_or_directory}}`\n\n- Recursively change the owner of a directory and its contents:\n\n`chown -R {{user}} {{path/to/directory}}`\n\n- Change the owner of a symbolic link:\n\n`chown -h {{user}} {{path/to/symlink}}`\n\n- Change the owner of a file/directory to match a reference file:\n\n`chown --reference={{path/to/reference_file}} {{path/to/file_or_directory}}`\n |
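One detail from the DESCRIPTION above that often trips people up: a user name followed by a bare colon means "set the group to that user's login group". A sketch using only the caller's own identity, since changing ownership to another user requires root:

```shell
# "user:" with no group resets the group to user's login group.
f=$(mktemp)
me=$(id -un)
chown "$me:" "$f"          # owner unchanged; group set to $me's login group
stat -c '%U:%G' "$f"
rm -f "$f"
```

This is the same rule the page summarizes as "changed to login group if implied by a ':' following a symbolic OWNER".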
chpasswd | chpasswd(8) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training chpasswd(8) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | CAVEATS | CONFIGURATION | FILES | SEE ALSO | COLOPHON CHPASSWD(8) System Management Commands CHPASSWD(8) NAME top chpasswd - update passwords in batch mode SYNOPSIS top chpasswd [options] DESCRIPTION top The chpasswd command reads a list of user name and password pairs from standard input and uses this information to update a group of existing users. Each line is of the format: user_name:password By default the passwords must be supplied in clear-text, and are encrypted by chpasswd. Also the password age will be updated, if present. By default, passwords are encrypted by PAM, but (even if not recommended) you can select a different encryption method with the -e, -m, or -c options. Except when PAM is used to encrypt the passwords, chpasswd first updates all the passwords in memory, and then commits all the changes to disk if no errors occurred for any user. When PAM is used to encrypt the passwords (and update the passwords in the system database) then if a password cannot be updated chpasswd continues updating the passwords of the next users, and will return an error code on exit. This command is intended to be used in a large system environment where many accounts are created at a single time. OPTIONS top The options which apply to the chpasswd command are: -c, --crypt-method METHOD Use the specified method to encrypt the passwords. The available methods are DES, MD5, NONE, and SHA256 or SHA512 if your libc supports these methods. By default, PAM is used to encrypt the passwords. -e, --encrypted Supplied passwords are in encrypted form. -h, --help Display help message and exit. -m, --md5 Use MD5 encryption instead of DES when the supplied passwords are not encrypted. 
-R, --root CHROOT_DIR Apply changes in the CHROOT_DIR directory and use the configuration files from the CHROOT_DIR directory. Only absolute paths are supported. -P, --prefix PREFIX_DIR Apply changes to configuration files under the root filesystem found under the directory PREFIX_DIR. This option does not chroot and is intended for preparing a cross-compilation target. Some limitations: NIS and LDAP users/groups are not verified. PAM authentication is using the host files. No SELINUX support. -s, --sha-rounds ROUNDS Use the specified number of rounds to encrypt the passwords. The value 0 means that the system will choose the default number of rounds for the crypt method (5000). A minimal value of 1000 and a maximal value of 999,999,999 will be enforced. You can only use this option with the SHA256 or SHA512 crypt method. By default, the number of rounds is defined by the SHA_CRYPT_MIN_ROUNDS and SHA_CRYPT_MAX_ROUNDS variables in /etc/login.defs. CAVEATS top Remember to set permissions or umask to prevent readability of unencrypted files by other users. CONFIGURATION top The following configuration variables in /etc/login.defs change the behavior of this tool: SHA_CRYPT_MIN_ROUNDS (number), SHA_CRYPT_MAX_ROUNDS (number) When ENCRYPT_METHOD is set to SHA256 or SHA512, this defines the number of SHA rounds used by the encryption algorithm by default (when the number of rounds is not specified on the command line). With a lot of rounds, it is more difficult to brute-force the password. But note also that more CPU resources will be needed to authenticate users. If not specified, the libc will choose the default number of rounds (5000), which is orders of magnitude too low for modern hardware. The values must be inside the 1000-999,999,999 range. If only one of the SHA_CRYPT_MIN_ROUNDS or SHA_CRYPT_MAX_ROUNDS values is set, then this value will be used. If SHA_CRYPT_MIN_ROUNDS > SHA_CRYPT_MAX_ROUNDS, the highest value will be used. 
Note: This only affects the generation of group passwords. The generation of user passwords is done by PAM and subject to the PAM configuration. It is recommended to set this variable consistently with the PAM configuration. FILES top /etc/passwd User account information. /etc/shadow Secure user account information. /etc/login.defs Shadow password suite configuration. /etc/pam.d/chpasswd PAM configuration for chpasswd. SEE ALSO top passwd(1), newusers(8), login.defs(5), useradd(8). COLOPHON top This page is part of the shadow-utils (utilities for managing accounts and shadow password files) project. Information about the project can be found at https://github.com/shadow-maint/shadow. If you have a bug report for this manual page, send it to pkg-shadow-devel@alioth-lists.debian.net. This page was obtained from the project's upstream Git repository https://github.com/shadow-maint/shadow on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-15.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org shadow-utils 4.11.1 12/22/2023 CHPASSWD(8) Pages that refer to this page: passwd(1) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH. 
| # chpasswd\n\n> Change the passwords for multiple users by using `stdin`.\n> More information: <https://manned.org/chpasswd.8>.\n\n- Change the password for a specific user:\n\n`printf "{{username}}:{{new_password}}" | sudo chpasswd`\n\n- Change the passwords for multiple users (The input text must not contain any spaces.):\n\n`printf "{{username_1}}:{{new_password_1}}\n{{username_2}}:{{new_password_2}}" | sudo chpasswd`\n\n- Change the password for a specific user, and specify it in encrypted form:\n\n`printf "{{username}}:{{new_encrypted_password}}" | sudo chpasswd --encrypted`\n\n- Change the password for a specific user, and use a specific encryption for the stored password:\n\n`printf "{{username}}:{{new_password}}" | sudo chpasswd --crypt-method {{NONE|DES|MD5|SHA256|SHA512}}`\n |
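Since `chpasswd` reads `user_name:password` pairs from standard input, the batch is usually assembled with `printf`. A sketch of building that input; the user names and passwords are hypothetical, and the final pipe into `chpasswd` is shown commented out because it requires root:

```shell
# Build the user_name:password batch that chpasswd reads on stdin.
# "alice" and "bob" are placeholder accounts for illustration only.
batch=$(printf '%s:%s\n' alice 's3cret!' bob 'pa55wd')
printf '%s\n' "$batch"
# printf '%s\n' "$batch" | chpasswd --crypt-method SHA512   # run as root
```

Keeping the batch in a variable (rather than a file) sidesteps the CAVEATS note about world-readable files holding unencrypted passwords, though it can still leak via shell history or process inspection.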
chroot | chroot(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training chroot(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | AUTHOR | REPORTING BUGS | COPYRIGHT | SEE ALSO | COLOPHON CHROOT(1) User Commands CHROOT(1) NAME top chroot - run command or interactive shell with special root directory SYNOPSIS top chroot [OPTION] NEWROOT [COMMAND [ARG]...] chroot OPTION DESCRIPTION top Run COMMAND with root directory set to NEWROOT. --groups=G_LIST specify supplementary groups as g1,g2,..,gN --userspec=USER:GROUP specify user and group (ID or name) to use --skip-chdir do not change working directory to '/' --help display this help and exit --version output version information and exit If no command is given, run '"$SHELL" -i' (default: '/bin/sh -i'). Exit status: 125 if the chroot command itself fails 126 if COMMAND is found but cannot be invoked 127 if COMMAND cannot be found - the exit status of COMMAND otherwise AUTHOR top Written by Roland McGrath. REPORTING BUGS top GNU coreutils online help: <https://www.gnu.org/software/coreutils/> Report any translation bugs to <https://translationproject.org/team/> COPYRIGHT top Copyright 2023 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. SEE ALSO top chroot(2) Full documentation <https://www.gnu.org/software/coreutils/chroot> or available locally via: info '(coreutils) chroot invocation' COLOPHON top This page is part of the coreutils (basic file, shell and text manipulation utilities) project. Information about the project can be found at http://www.gnu.org/software/coreutils/. If you have a bug report for this manual page, see http://www.gnu.org/software/coreutils/. This page was obtained from the tarball coreutils-9.4.tar.xz fetched from http://ftp.gnu.org/gnu/coreutils/ on 2023-12-22. 
If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org GNU coreutils 9.4 August 2023 CHROOT(1) Pages that refer to this page: systemd-nspawn(1), chroot(2), lxc.container.conf(5), mount_namespaces(7), btrfs-receive(8), pivot_root(8) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH. | # chroot\n\n> Run command or interactive shell with special root directory.\n> More information: <https://www.gnu.org/software/coreutils/chroot>.\n\n- Run command as new root directory:\n\n`chroot {{path/to/new/root}} {{command}}`\n\n- Run command as a specific user and group:\n\n`chroot --userspec={{username_or_id:group_name_or_id}} {{path/to/new/root}} {{command}}`\n |
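The exit-status table above can be observed without building a real root filesystem, because pointing GNU chroot at a missing directory makes the command itself fail. A sketch, assuming coreutils `chroot` is on the PATH:

```shell
# chroot exits 125 when it cannot even enter the new root
# (here the directory does not exist, so the failure is chroot's own).
chroot /nonexistent-root true 2>/dev/null
echo "exit status: $?"
```

The 126/127 statuses in the table are only reachable once the new root is entered, i.e. when COMMAND itself cannot be invoked or found.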
chrt | chrt(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training chrt(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | POLICIES | SCHEDULING OPTIONS | OPTIONS | EXAMPLES | PERMISSIONS | NOTES | AUTHORS | SEE ALSO | REPORTING BUGS | AVAILABILITY CHRT(1) User Commands CHRT(1) NAME top chrt - manipulate the real-time attributes of a process SYNOPSIS top chrt [options] priority command argument ... chrt [options] -p [priority] PID DESCRIPTION top chrt sets or retrieves the real-time scheduling attributes of an existing PID, or runs command with the given attributes. POLICIES top -o, --other Set scheduling policy to SCHED_OTHER (time-sharing scheduling). This is the default Linux scheduling policy. -f, --fifo Set scheduling policy to SCHED_FIFO (first in-first out). -r, --rr Set scheduling policy to SCHED_RR (round-robin scheduling). When no policy is defined, the SCHED_RR is used as the default. -b, --batch Set scheduling policy to SCHED_BATCH (scheduling batch processes). Linux-specific, supported since 2.6.16. The priority argument has to be set to zero. -i, --idle Set scheduling policy to SCHED_IDLE (scheduling very low priority jobs). Linux-specific, supported since 2.6.23. The priority argument has to be set to zero. -d, --deadline Set scheduling policy to SCHED_DEADLINE (sporadic task model deadline scheduling). Linux-specific, supported since 3.14. The priority argument has to be set to zero. See also --sched-runtime, --sched-deadline and --sched-period. The relation between the options required by the kernel is runtime ≤ deadline ≤ period. chrt copies period to deadline if --sched-deadline is not specified and deadline to runtime if --sched-runtime is not specified. It means that at least --sched-period has to be specified. See sched(7) for more details. SCHEDULING OPTIONS top -T, --sched-runtime nanoseconds Specifies runtime parameter for SCHED_DEADLINE policy (Linux-specific). 
-P, --sched-period nanoseconds Specifies period parameter for SCHED_DEADLINE policy (Linux-specific). Note that the kernel's lower limit is 100 milliseconds. -D, --sched-deadline nanoseconds Specifies deadline parameter for SCHED_DEADLINE policy (Linux-specific). -R, --reset-on-fork Use SCHED_RESET_ON_FORK or SCHED_FLAG_RESET_ON_FORK flag. Linux-specific, supported since 2.6.31. Each thread has a reset-on-fork scheduling flag. When this flag is set, children created by fork(2) do not inherit privileged scheduling policies. After the reset-on-fork flag has been enabled, it can be reset only if the thread has the CAP_SYS_NICE capability. This flag is disabled in child processes created by fork(2). More precisely, if the reset-on-fork flag is set, the following rules apply for subsequently created children: If the calling thread has a scheduling policy of SCHED_FIFO or SCHED_RR, the policy is reset to SCHED_OTHER in child processes. If the calling process has a negative nice value, the nice value is reset to zero in child processes. OPTIONS top -a, --all-tasks Set or retrieve the scheduling attributes of all the tasks (threads) for a given PID. -m, --max Show minimum and maximum valid priorities, then exit. -p, --pid Operate on an existing PID and do not launch a new task. -v, --verbose Show status information. -h, --help Display help text and exit. -V, --version Print version and exit. EXAMPLES top The default behavior is to run a new command: chrt priority command [arguments] You can also retrieve the real-time attributes of an existing task: chrt -p PID Or set them: chrt -r -p priority PID This, for example, sets real-time scheduling to priority 30 for the process PID with the SCHED_RR (round-robin) class: chrt -r -p 30 PID Reset priorities to default for a process: chrt -o -p 0 PID See sched(7) for a detailed discussion of the different scheduler classes and how they interact. 
PERMISSIONS top A user must possess CAP_SYS_NICE to change the scheduling attributes of a process. Any user can retrieve the scheduling information. NOTES top Only SCHED_FIFO, SCHED_OTHER and SCHED_RR are part of POSIX 1003.1b Process Scheduling. The other scheduling attributes may be ignored on some systems. Linux' default scheduling policy is SCHED_OTHER. AUTHORS top Robert Love <rml@tech9.net>, Karel Zak <kzak@redhat.com> SEE ALSO top nice(1), renice(1), taskset(1), sched(7) See sched_setscheduler(2) for a description of the Linux scheduling scheme. REPORTING BUGS top For bug reports, use the issue tracker at https://github.com/util-linux/util-linux/issues. AVAILABILITY top The chrt command is part of the util-linux package which can be downloaded from Linux Kernel Archive <https://www.kernel.org/pub/linux/utils/util-linux/>. This page is part of the util-linux (a random collection of Linux utilities) project. Information about the project can be found at https://www.kernel.org/pub/linux/utils/util-linux/. If you have a bug report for this manual page, send it to util-linux@vger.kernel.org. This page was obtained from the project's upstream Git repository git://git.kernel.org/pub/scm/utils/util-linux/util-linux.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-14.) util-linux 2.39.594-1e0ad 2023-08-25 CHRT(1) Pages that refer to this page: renice(1), taskset(1), sched_setattr(2), sched_setscheduler(2), sched(7) 
| # chrt\n\n> Manipulate the real-time attributes of a process.\n> More information: <https://man7.org/linux/man-pages/man1/chrt.1.html>.\n\n- Display attributes of a process:\n\n`chrt --pid {{PID}}`\n\n- Display attributes of all threads of a process:\n\n`chrt --all-tasks --pid {{PID}}`\n\n- Display the min/max priority values that can be used with `chrt`:\n\n`chrt --max`\n\n- Set the scheduling policy for a process:\n\n`chrt --{{deadline|idle|batch|rr|fifo|other}} --pid {{priority}} {{PID}}`\n |
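The chrt row above covers both querying and setting scheduling attributes. A minimal runnable sketch of the unprivileged operations, assuming util-linux's chrt is installed (setting SCHED_FIFO/SCHED_RR would additionally need CAP_SYS_NICE, per the PERMISSIONS section):

```shell
# Show the min/max static priorities each policy accepts (needs no privileges).
chrt --max

# Launch a command under the default SCHED_OTHER policy with priority 0.
# SCHED_OTHER only accepts priority 0, so no CAP_SYS_NICE is required.
chrt --other 0 echo "started under SCHED_OTHER"

# Retrieve the scheduling attributes of the current shell process.
chrt --pid $$
```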
chsh | chsh(1) - Linux manual page Another version of this page is provided by the shadow-utils project NAME | SYNOPSIS | DESCRIPTION | OPTIONS | VALID SHELLS | EXIT STATUS | AUTHORS | SEE ALSO | REPORTING BUGS | AVAILABILITY CHSH(1) User Commands CHSH(1) NAME top chsh - change your login shell SYNOPSIS top chsh [-s shell] [-l] [-h] [-V] [username] DESCRIPTION top chsh is used to change your login shell. If a shell is not given on the command line, chsh prompts for one. chsh supports non-local entries (kerberos, LDAP, etc.) if linked with libuser, otherwise use ypchsh(1), lchsh(1) or any other implementation for non-local entries. OPTIONS top -s, --shell shell Specify your login shell. -l, --list-shells Print the list of shells listed in /etc/shells and exit. -h, --help Display help text and exit. The short option -h has been used since version 2.30; old versions use the deprecated -u. -V, --version Print version and exit. The short option -V has been used since version 2.39; old versions use the deprecated -v. VALID SHELLS top chsh will accept the full pathname of any executable file on the system. The default behavior for non-root users is to accept only shells listed in the /etc/shells file, and issue a warning for root user. It can also be configured at compile-time to only issue a warning for all users. EXIT STATUS top Returns 0 if operation was successful, 1 if operation failed or command syntax was not valid. AUTHORS top Salvatore Valente <svalente@mit.edu> SEE ALSO top login(1), login.defs(5), passwd(5), shells(5) REPORTING BUGS top For bug reports, use the issue tracker at https://github.com/util-linux/util-linux/issues. AVAILABILITY top The chsh command is part of the util-linux package which can be downloaded from Linux Kernel Archive <https://www.kernel.org/pub/linux/utils/util-linux/>. 
This page is part of the util-linux (a random collection of Linux utilities) project. Information about the project can be found at https://www.kernel.org/pub/linux/utils/util-linux/. If you have a bug report for this manual page, send it to util-linux@vger.kernel.org. This page was obtained from the project's upstream Git repository git://git.kernel.org/pub/scm/utils/util-linux/util-linux.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-14.) util-linux 2.39.594-1e0ad 2023-07-19 CHSH(1) Pages that refer to this page: chfn(1), chfn(1@@shadow-utils), intro(1), passwd(5), shells(5), groupadd(8), groupdel(8), groupmems(8), groupmod(8), useradd(8), userdel(8), usermod(8) | # chsh\n\n> Change user's login shell.\n> Part of `util-linux`.\n> More information: <https://manned.org/chsh>.\n\n- Set a specific login shell for the current user interactively:\n\n`chsh`\n\n- Set a specific login [s]hell for the current user:\n\n`chsh --shell {{path/to/shell}}`\n\n- Set a login [s]hell for a specific user:\n\n`sudo chsh --shell {{path/to/shell}} {{username}}`\n\n- [l]ist available shells:\n\n`chsh --list-shells`\n |
cksum | cksum(1) - Linux manual page NAME | SYNOPSIS | DESCRIPTION | AUTHOR | REPORTING BUGS | COPYRIGHT | SEE ALSO | COLOPHON CKSUM(1) User Commands CKSUM(1) NAME top cksum - compute and verify file checksums SYNOPSIS top cksum [OPTION]... [FILE]... DESCRIPTION top Print or verify checksums. By default use the 32 bit CRC algorithm. With no FILE, or when FILE is -, read standard input. Mandatory arguments to long options are mandatory for short options too. -a, --algorithm=TYPE select the digest type to use. See DIGEST below. --base64 emit base64-encoded digests, not hexadecimal -c, --check read checksums from the FILEs and check them -l, --length=BITS digest length in bits; must not exceed the max for the blake2 algorithm and must be a multiple of 8 --raw emit a raw binary digest, not hexadecimal --tag create a BSD-style checksum (the default) --untagged create a reversed style checksum, without digest type -z, --zero end each output line with NUL, not newline, and disable file name escaping The following five options are useful only when verifying checksums: --ignore-missing don't fail or report status for missing files --quiet don't print OK for each successfully verified file --status don't output anything, status code shows success --strict exit non-zero for improperly formatted checksum lines -w, --warn warn about improperly formatted checksum lines --debug indicate which implementation used --help display this help and exit --version output version information and exit DIGEST determines the digest algorithm and default output format: sysv (equivalent to sum -s) bsd (equivalent to sum -r) crc (equivalent to cksum) md5 (equivalent to md5sum) sha1 (equivalent to sha1sum) sha224 (equivalent to sha224sum) sha256 (equivalent to sha256sum) sha384 (equivalent to sha384sum) sha512 (equivalent to sha512sum) blake2b (equivalent to b2sum) sm3 (only available through cksum) 
When checking, the input should be a former output of this program, or equivalent standalone program. AUTHOR top Written by Padraig Brady and Q. Frank Xia. REPORTING BUGS top GNU coreutils online help: <https://www.gnu.org/software/coreutils/> Report any translation bugs to <https://translationproject.org/team/> COPYRIGHT top Copyright 2023 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. SEE ALSO top Full documentation <https://www.gnu.org/software/coreutils/cksum> or available locally via: info '(coreutils) cksum invocation' COLOPHON top This page is part of the coreutils (basic file, shell and text manipulation utilities) project. Information about the project can be found at http://www.gnu.org/software/coreutils/. If you have a bug report for this manual page, see http://www.gnu.org/software/coreutils/. This page was obtained from the tarball coreutils-9.4.tar.xz fetched from http://ftp.gnu.org/gnu/coreutils/ on 2023-12-22. GNU coreutils 9.4 August 2023 CKSUM(1) Pages that refer to this page: b2sum(1), md5sum(1), sha1sum(1), sha224sum(1), sha256sum(1), sha384sum(1), sha512sum(1) 
| # cksum\n\n> Calculate CRC checksums and byte counts of a file.\n> Note: on old UNIX systems the CRC implementation may differ.\n> More information: <https://www.gnu.org/software/coreutils/cksum>.\n\n- Display a 32-bit checksum, size in bytes and filename:\n\n`cksum {{path/to/file}}`\n |
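The cksum row above describes the default CRC mode and the newer -a/--algorithm digests; since -a needs coreutils ≥ 9.0, this sketch sticks to the plain CRC output, whose format is `<CRC> <byte count> <filename>`:

```shell
# Create a small input file.
printf 'hello\n' > /tmp/cksum_demo.txt

# cksum prints: <32-bit CRC> <byte count> <filename>
cksum /tmp/cksum_demo.txt

# The CRC is deterministic: identical bytes always yield an identical line,
# which is what makes cksum usable for quick integrity comparisons.
cksum /tmp/cksum_demo.txt
```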
clear | clear(1) - Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | HISTORY | PORTABILITY | SEE ALSO | COLOPHON CLEAR(1) General Commands Manual CLEAR(1) NAME top clear - clear the terminal screen SYNOPSIS top clear [-Ttype] [-V] [-x] DESCRIPTION top clear clears your terminal's screen if this is possible, including the terminal's scrollback buffer (if the extended E3 capability is defined). clear looks in the environment for the terminal type given by the environment variable TERM, and then in the terminfo database to determine how to clear the screen. clear writes to the standard output. You can redirect the standard output to a file (which prevents clear from actually clearing the screen), and later cat the file to the screen, clearing it at that point. OPTIONS top -T type indicates the type of terminal. Normally this option is unnecessary, because the default is taken from the environment variable TERM. If -T is specified, then the shell variables LINES and COLUMNS will also be ignored. -V reports the version of ncurses which was used in this program, and exits. The options are as follows: -x do not attempt to clear the terminal's scrollback buffer using the extended E3 capability. HISTORY top A clear command appeared in 2.79BSD dated February 24, 1979. Later that was provided in Unix 8th edition (1985). AT&T adapted a different BSD program (tset) to make a new command (tput), and used this to replace the clear command with a shell script which calls tput clear, e.g., /usr/bin/tput ${1:+-T$1} clear 2> /dev/null exit In 1989, when Keith Bostic revised the BSD tput command to make it similar to the AT&T tput, he added a shell script for the clear command: exec tput clear The remainder of the script in each case is a copyright notice. The ncurses clear command began in 1995 by adapting the original BSD clear command (with terminfo, of course). 
The E3 extension came later: In June 1999, xterm provided an extension to the standard control sequence for clearing the screen. Rather than clearing just the visible part of the screen using printf '\033[2J' one could clear the scrollback using printf '\033[3J' This is documented in XTerm Control Sequences as a feature originating with xterm. A few other terminal developers adopted the feature, e.g., PuTTY in 2006. In April 2011, a Red Hat developer submitted a patch to the Linux kernel, modifying its console driver to do the same thing. The Linux change, part of the 3.0 release, did not mention xterm, although it was cited in the Red Hat bug report (#683733) which led to the change. Again, a few other terminal developers adopted the feature. But the next relevant step was a change to the clear program in 2013 to incorporate this extension. In 2013, the E3 extension was overlooked in tput with the clear parameter. That was addressed in 2016 by reorganizing tput to share its logic with clear and tset. PORTABILITY top Neither IEEE Std 1003.1/The Open Group Base Specifications Issue 7 (POSIX.1-2008) nor X/Open Curses Issue 7 documents tset or reset. The latter documents tput, which could be used to replace this utility either via a shell script or by an alias (such as a symbolic link) to run tput as clear. SEE ALSO top tput(1), terminfo(5), xterm(1). This describes ncurses version @NCURSES_MAJOR@.@NCURSES_MINOR@ (patch @NCURSES_PATCH@). COLOPHON top This page is part of the ncurses (new curses) project. Information about the project can be found at https://www.gnu.org/software/ncurses/ncurses.html. If you have a bug report for this manual page, send it to bug-ncurses-request@gnu.org. This page was obtained from the project's upstream Git mirror of the CVS repository https://github.com/mirror/ncurses.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-03-12.) 
CLEAR(1) Pages that refer to this page: setterm(1), user_caps(5) | # clear\n\n> Clears the screen of the terminal.\n> More information: <https://manned.org/clear>.\n\n- Clear the screen (equivalent to pressing Control-L in Bash shell):\n\n`clear`\n\n- Clear the screen but keep the terminal's scrollback buffer:\n\n`clear -x`\n\n- Indicate the type of terminal to clean (defaults to the value of the environment variable `TERM`):\n\n`clear -T {{type_of_terminal}}`\n\n- Display the version of `ncurses` used by `clear`:\n\n`clear -V`\n |
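The HISTORY section of the clear row cites the raw control sequences behind screen clearing. A small sketch showing that ESC [2J (erase display) and ESC [3J (erase scrollback, the E3 extension) are plain byte sequences you can emit and inspect yourself, without a terminfo lookup:

```shell
# ESC [ 2 J erases the visible screen; ESC [ 3 J erases the scrollback buffer
# on terminals that support the E3 extension (xterm, PuTTY, Linux console >= 3.0).
erase_display=$(printf '\033[2J')
erase_scrollback=$(printf '\033[3J')

# Captured into variables rather than sent to a terminal, so the bytes are
# inspectable: each sequence is exactly 4 bytes (ESC, '[', digit, 'J').
printf '%s' "$erase_display" | wc -c
```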
cmp | cmp(1) - Linux manual page NAME | SYNOPSIS | DESCRIPTION | AUTHOR | REPORTING BUGS | COPYRIGHT | SEE ALSO | COLOPHON CMP(1) User Commands CMP(1) NAME top cmp - compare two files byte by byte SYNOPSIS top cmp [OPTION]... FILE1 [FILE2 [SKIP1 [SKIP2]]] DESCRIPTION top Compare two files byte by byte. The optional SKIP1 and SKIP2 specify the number of bytes to skip at the beginning of each file (zero by default). Mandatory arguments to long options are mandatory for short options too. -b, --print-bytes print differing bytes -i, --ignore-initial=SKIP skip first SKIP bytes of both inputs -i, --ignore-initial=SKIP1:SKIP2 skip first SKIP1 bytes of FILE1 and first SKIP2 bytes of FILE2 -l, --verbose output byte numbers and differing byte values -n, --bytes=LIMIT compare at most LIMIT bytes -s, --quiet, --silent suppress all normal output --help display this help and exit -v, --version output version information and exit SKIP values may be followed by the following multiplicative suffixes: kB 1000, K 1024, MB 1,000,000, M 1,048,576, GB 1,000,000,000, G 1,073,741,824, and so on for T, P, E, Z, Y. If a FILE is '-' or missing, read standard input. Exit status is 0 if inputs are the same, 1 if different, 2 if trouble. AUTHOR top Written by Torbjorn Granlund and David MacKenzie. REPORTING BUGS top Report bugs to: bug-diffutils@gnu.org GNU diffutils home page: <https://www.gnu.org/software/diffutils/> General help using GNU software: <https://www.gnu.org/gethelp/> COPYRIGHT top Copyright 2023 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. SEE ALSO top diff(1), diff3(1), sdiff(1) The full documentation for cmp is maintained as a Texinfo manual. 
If the info and cmp programs are properly installed at your site, the command info cmp should give you access to the complete manual. COLOPHON top This page is part of the diffutils (GNU diff utilities) project. Information about the project can be found at http://savannah.gnu.org/projects/diffutils/. If you have a bug report for this manual page, send it to bug-diffutils@gnu.org. This page was obtained from the project's upstream Git repository git://git.savannah.gnu.org/diffutils.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-09-20.) diffutils 3.10.207-774b December 2023 CMP(1) Pages that refer to this page: diff(1), diff3(1), grep(1), sdiff(1) | # cmp\n\n> Compare two files byte by byte.\n> More information: <https://www.gnu.org/software/diffutils/manual/html_node/Invoking-cmp.html>.\n\n- Output char and line number of the first difference between two files:\n\n`cmp {{path/to/file1}} {{path/to/file2}}`\n\n- Output info of the first difference: char, line number, bytes, and values:\n\n`cmp --print-bytes {{path/to/file1}} {{path/to/file2}}`\n\n- Output the byte numbers and values of every difference:\n\n`cmp --verbose {{path/to/file1}} {{path/to/file2}}`\n\n- Compare files but output nothing, yield only the exit status:\n\n`cmp --quiet {{path/to/file1}} {{path/to/file2}}`\n |
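The exit-status contract stated in the cmp row (0 same, 1 different, 2 trouble) is what makes `cmp --quiet` useful in scripts; a runnable sketch, including the SKIP1:SKIP2 form of --ignore-initial:

```shell
# Build two files that differ only at byte 5.
printf 'abcdX\n' > /tmp/cmp_a
printf 'abcdY\n' > /tmp/cmp_b

# Identical inputs: exit status 0, no output.
cmp --quiet /tmp/cmp_a /tmp/cmp_a && echo "same"

# Different inputs: exit status 1; without --quiet cmp would report
# the byte and line number of the first difference.
cmp --quiet /tmp/cmp_a /tmp/cmp_b || echo "differ"

# SKIP1:SKIP2 skips a different prefix in each file; here 4 bytes of
# file1 and 0 of file2, so 'X\n' is compared against 'abcdY\n'.
cmp --quiet --ignore-initial=4:0 /tmp/cmp_a /tmp/cmp_b || echo "still differ"
```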
colon | colon(1p) - Linux manual page PROLOG | NAME | SYNOPSIS | DESCRIPTION | OPTIONS | OPERANDS | STDIN | INPUT FILES | ENVIRONMENT VARIABLES | ASYNCHRONOUS EVENTS | STDOUT | STDERR | OUTPUT FILES | EXTENDED DESCRIPTION | EXIT STATUS | CONSEQUENCES OF ERRORS | APPLICATION USAGE | EXAMPLES | RATIONALE | FUTURE DIRECTIONS | SEE ALSO | COPYRIGHT COLON(1P) POSIX Programmer's Manual COLON(1P) PROLOG top This manual page is part of the POSIX Programmer's Manual. The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux. NAME top colon — null utility SYNOPSIS top : [argument...] DESCRIPTION top This utility shall only expand command arguments. It is used when a command is needed, as in the then condition of an if command, but nothing is to be done by the command. OPTIONS top None. OPERANDS top See the DESCRIPTION. STDIN top Not used. INPUT FILES top None. ENVIRONMENT VARIABLES top None. ASYNCHRONOUS EVENTS top Default. STDOUT top Not used. STDERR top The standard error shall be used only for diagnostic messages. OUTPUT FILES top None. EXTENDED DESCRIPTION top None. EXIT STATUS top Zero. CONSEQUENCES OF ERRORS top Default. The following sections are informative. APPLICATION USAGE top None. EXAMPLES top : ${X=abc} if false then : else echo $X fi abc As with any of the special built-ins, the null utility can also have variable assignments and redirections associated with it, such as: x=y : > z which sets variable x to the value y (so that it persists after the null utility completes) and creates or truncates file z. RATIONALE top None. FUTURE DIRECTIONS top None. 
SEE ALSO top Section 2.14, Special Built-In Utilities COPYRIGHT top Portions of this text are reprinted and reproduced in electronic form from IEEE Std 1003.1-2017, Standard for Information Technology -- Portable Operating System Interface (POSIX), The Open Group Base Specifications Issue 7, 2018 Edition, Copyright (C) 2018 by the Institute of Electrical and Electronics Engineers, Inc and The Open Group. In the event of any discrepancy between this version and the original IEEE and The Open Group Standard, the original IEEE and The Open Group Standard is the referee document. The original Standard can be obtained online at http://www.opengroup.org/unix/online.html . Any typographical or formatting errors that appear in this page are most likely to have been introduced during the conversion of the source files to man page format. To report such errors, see https://www.kernel.org/doc/man-pages/reporting_bugs.html . IEEE/The Open Group 2017 COLON(1P) | # colon\n\n> Returns a successful exit status code of 0.\n> More information: <https://pubs.opengroup.org/onlinepubs/9699919799/utilities/V3_chap02.html#colon>.\n\n- Return a successful exit code:\n\n`:`\n\n- Make a command always exit with 0:\n\n`{{command}} || :`\n |
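The `: ${X=abc}` example in the colon row is the classic use of the null utility: its expansions and redirections take effect even though the command itself does nothing. A sketch runnable in any POSIX shell:

```shell
# : expands its arguments and succeeds; ${X=abc} assigns X only if X is unset,
# and the assignment persists after : completes.
unset X
: ${X=abc}
echo "$X"

# Redirections attached to : are performed too: this creates or truncates the file.
: > /tmp/colon_demo_z
ls -l /tmp/colon_demo_z

# Guarantee success regardless of a command's exit status:
false || :
echo "exit status: $?"
```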
colrm | colrm(1) - Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | HISTORY | SEE ALSO | REPORTING BUGS | AVAILABILITY COLRM(1) User Commands COLRM(1) NAME top colrm - remove columns from a file SYNOPSIS top colrm [first [last]] DESCRIPTION top colrm removes selected columns from a file. Input is taken from standard input. Output is sent to standard output. If called with one parameter the columns of each line will be removed starting with the specified first column. If called with two parameters the columns from the first column to the last column will be removed. Column numbering starts with column 1. OPTIONS top -h, --help Display help text and exit. -V, --version Print version and exit. HISTORY top The colrm command appeared in 3.0BSD. SEE ALSO top awk(1p), column(1), expand(1), paste(1) REPORTING BUGS top For bug reports, use the issue tracker at https://github.com/util-linux/util-linux/issues. AVAILABILITY top The colrm command is part of the util-linux package which can be downloaded from Linux Kernel Archive <https://www.kernel.org/pub/linux/utils/util-linux/>. This page is part of the util-linux (a random collection of Linux utilities) project. Information about the project can be found at https://www.kernel.org/pub/linux/utils/util-linux/. If you have a bug report for this manual page, send it to util-linux@vger.kernel.org. This page was obtained from the project's upstream Git repository git://git.kernel.org/pub/scm/utils/util-linux/util-linux.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-14.) 
util-linux 2.39.594-1e0ad 2023-07-19 COLRM(1) Pages that refer to this page: column(1) | # colrm\n\n> Remove columns from `stdin`.\n> More information: <https://manned.org/colrm>.\n\n- Remove first column of `stdin`:\n\n`colrm {{1 1}}`\n\n- Remove from 3rd column till the end of each line:\n\n`colrm {{3}}`\n\n- Remove from the 3rd column till the 5th column of each line:\n\n`colrm {{3 5}}`\n |
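Column numbering in colrm starts at 1 and both endpoints of the range are removed, so the tldr examples above behave as in this sketch (assumes util-linux's colrm is installed):

```shell
# Remove columns 3 through 5 of each input line: 'abcdef' -> 'abf'.
echo 'abcdef' | colrm 3 5

# With a single argument, everything from that column onward is dropped:
# 'abcdef' -> 'ab'.
echo 'abcdef' | colrm 3
```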
column | column(1) - Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | ENVIRONMENT | HISTORY | BUGS | EXAMPLES | SEE ALSO | REPORTING BUGS | AVAILABILITY COLUMN(1) User Commands COLUMN(1) NAME top column - columnate lists SYNOPSIS top column [options] [file ...] DESCRIPTION top The column utility formats its input into multiple columns. The utility supports three modes: columns are filled before rows This is the default mode (required by backward compatibility). rows are filled before columns This mode is enabled by option -x, --fillrows table Determine the number of columns the input contains and create a table. This mode is enabled by option -t, --table and column formatting can be modified by --table-* options. Use this mode if not sure. The output is aligned to the terminal width in interactive mode and the 80 columns in non-interactive mode (see --output-width for more details). Input is taken from file, or otherwise from standard input. Empty lines are ignored and all invalid multibyte sequences are encoded by \x<hex> convention. OPTIONS top The argument columns for --table-* options is a comma separated list of the column names as defined by --table-columns, or names defined by --table-column or its column number in order as specified by input. It's possible to mix names and numbers. The special placeholder '0' (e.g. -R0) may be used to specify all columns and '-1' (e.g. -R -1) to specify the last visible column. It's possible to use ranges like '1-5' when addressing columns by numbers. -J, --json Use JSON output format to print the table, the option --table-columns is required and the option --table-name is recommended. -c, --output-width width Output is formatted to a width specified as number of characters. The original name of this option is --columns; this name is deprecated since v2.30. Note that input longer than width is not truncated by default. 
The default is a terminal width and the 80 columns in non-interactive mode. The column headers are never truncated. The placeholder "unlimited" (or 0) is possible to use to not restrict output width. This is recommended for example when output to the files rather than on terminal. -d, --table-noheadings Do not print header. This option allows the use of logical column names on the command line, but keeps the header hidden when printing the table. -o, --output-separator string Specify the columns delimiter for table output (default is two spaces). -s, --separator separators Specify the possible input item delimiters (default is whitespace). -t, --table Determine the number of columns the input contains and create a table. Columns are delimited with whitespace, by default, or with the characters supplied using the --output-separator option. Table output is useful for pretty-printing. -C, --table-column properties Define one column by comma separated list of column attributes. This option can be used more than once, every use defines just one column. The properties replace some of --table- options. For example --table-column name=FOO,right define one column where text is aligned to right. The option is mutually exclusive to --table-columns. The currently supported attributes are: name=string Specifies column name. trunc The column text can be truncated when necessary. The same as --table-truncate. right Right align text in the specified columns. The same as --table-right. width=number Specifies column width. The width is used as a hint only. The width is strictly followed only when strictwidth attribute is used too. strictwidth Strictly follow column width= setting. noextreme Specify columns where is possible to ignore unusually long cells. See --table-noextreme for more details. wrap Specify columns where is possible to use multi-line cell for long text when necessary. See --table-wrap. hide Don't print specified columns. See --table-hide. 
json=type Define column type for JSON output. Supported are string, number and boolean. -N, --table-columns names Specify the columns names by comma separated list of names. The names are used for the table header or to address column in option argument. See also --table-column. -l, --table-columns-limit number Specify maximal number of the input columns. The last column will contain all remaining line data if the limit is smaller than the number of the columns in the input data. -R, --table-right columns Right align text in the specified columns. -T, --table-truncate columns Specify columns where text can be truncated when necessary, otherwise very long table entries may be printed on multiple lines. -E, --table-noextreme columns Specify columns where is possible to ignore unusually long (longer than average) cells when calculate column width. The option has impact to the width calculation and table formatting, but the printed text is not affected. The option is used for the last visible column by default. -e, --table-header-repeat Print header line for each page. -W, --table-wrap columns Specify columns where is possible to use multi-line cell for long text when necessary. -H, --table-hide columns Don't print specified columns. The special placeholder '-' may be used to hide all unnamed columns (see --table-columns). -O, --table-order columns Specify columns order on output. -n, --table-name name Specify the table name used for JSON output. The default is "table". -m, --table-maxout Fill all available space on output. -L, --keep-empty-lines Preserve whitespace-only lines in the input. The default is to ignore empty lines altogether. This option's original name was --table-empty-lines but is now deprecated because it gives the false impression that the option only applies to table mode. -r, --tree column Specify column to use tree-like output. Note that the circular dependencies and other anomalies in child and parent relation are silently ignored. 
-i, --tree-id column Specify column with line ID to create child-parent relation. -p, --tree-parent column Specify column with parent ID to create child-parent relation. -x, --fillrows Fill rows before filling columns. -h, --help Display help text and exit. -V, --version Print version and exit. ENVIRONMENT top The environment variable COLUMNS is used to determine the size of the screen if no other information is available. HISTORY top The column command appeared in 4.3BSD-Reno. BUGS top Version 2.23 changed the -s option to be non-greedy, for example: printf "a:b:c\n1::3\n" | column -t -s ':' Old output: a b c 1 3 New output (since util-linux 2.23): a b c 1 3 Historical versions of this tool indicated that "rows are filled before columns" by default, and that the -x option reverses this. This wording did not reflect the actual behavior, and it has since been corrected (see above). Other implementations of column may continue to use the older documentation, but the behavior should be identical in any case. EXAMPLES top Print fstab with header line and align number to the right: sed 's/#.*//' /etc/fstab | column --table --table-columns SOURCE,TARGET,TYPE,OPTIONS,PASS,FREQ --table-right PASS,FREQ Print fstab and hide unnamed columns: sed 's/#.*//' /etc/fstab | column --table --table-columns SOURCE,TARGET,TYPE --table-hide - Print a tree: echo -e '1 0 A\n2 1 AA\n3 1 AB\n4 2 AAA\n5 2 AAB' | column --tree-id 1 --tree-parent 2 --tree 3 1 0 A 2 1 |-AA 4 2 | |-AAA 5 2 | `-AAB 3 1 `-AB SEE ALSO top colrm(1), ls(1), paste(1), sort(1) REPORTING BUGS top For bug reports, use the issue tracker at https://github.com/util-linux/util-linux/issues. AVAILABILITY top The column command is part of the util-linux package which can be downloaded from Linux Kernel Archive <https://www.kernel.org/pub/linux/utils/util-linux/>. This page is part of the util-linux (a random collection of Linux utilities) project. 
Information about the project can be found at https://www.kernel.org/pub/linux/utils/util-linux/. If you have a bug report for this manual page, send it to util-linux@vger.kernel.org. This page was obtained from the project's upstream Git repository git://git.kernel.org/pub/scm/utils/util-linux/util-linux.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-14.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org util-linux 2.39.594-1e0ad 2023-07-19 COLUMN(1) Pages that refer to this page: colrm(1) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH. | # column\n\n> Format `stdin` or a file into multiple columns.\n> Columns are filled before rows; the default separator is whitespace.\n> More information: <https://manned.org/column>.\n\n- Format the output of a command for a 30-character-wide display:\n\n`printf "header1 header2\nbar foo\n" | column --output-width {{30}}`\n\n- Split columns automatically and auto-align them in a tabular format:\n\n`printf "header1 header2\nbar foo\n" | column --table`\n\n- Specify the column delimiter character for the `--table` option (e.g. "," for CSV) (defaults to whitespace):\n\n`printf "header1,header2\nbar,foo\n" | column --table --separator {{,}}`\n\n- Fill rows before filling columns:\n\n`printf "header1\nbar\nfoobar\n" | column --output-width {{30}} --fillrows`\n |
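As a quick sketch of the table options above, using the short forms -t and -s (which long predate the --table-* spellings and work on any util-linux column):

```shell
# Align colon-separated records into a table: -t (--table) builds the
# table, -s (--separator) sets the input field delimiter.
printf 'alice:1000:/home/alice\nbob:1001:/home/bob\n' | column -t -s ':'
```

Each field lands in its own whitespace-aligned column; on newer util-linux you can add -R/--table-right to right-align numeric columns.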
comm | comm(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training comm(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | EXAMPLES | AUTHOR | REPORTING BUGS | COPYRIGHT | SEE ALSO | COLOPHON COMM(1) User Commands COMM(1) NAME top comm - compare two sorted files line by line SYNOPSIS top comm [OPTION]... FILE1 FILE2 DESCRIPTION top Compare sorted files FILE1 and FILE2 line by line. When FILE1 or FILE2 (not both) is -, read standard input. With no options, produce three-column output. Column one contains lines unique to FILE1, column two contains lines unique to FILE2, and column three contains lines common to both files. -1 suppress column 1 (lines unique to FILE1) -2 suppress column 2 (lines unique to FILE2) -3 suppress column 3 (lines that appear in both files) --check-order check that the input is correctly sorted, even if all input lines are pairable --nocheck-order do not check that the input is correctly sorted --output-delimiter=STR separate columns with STR --total output a summary -z, --zero-terminated line delimiter is NUL, not newline --help display this help and exit --version output version information and exit Note, comparisons honor the rules specified by 'LC_COLLATE'. EXAMPLES top comm -12 file1 file2 Print only lines present in both file1 and file2. comm -3 file1 file2 Print lines in file1 not in file2, and vice versa. AUTHOR top Written by Richard M. Stallman and David MacKenzie. REPORTING BUGS top GNU coreutils online help: <https://www.gnu.org/software/coreutils/> Report any translation bugs to <https://translationproject.org/team/> COPYRIGHT top Copyright 2023 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. 
SEE ALSO top join(1), uniq(1) Full documentation <https://www.gnu.org/software/coreutils/comm> or available locally via: info '(coreutils) comm invocation' COLOPHON top This page is part of the coreutils (basic file, shell and text manipulation utilities) project. Information about the project can be found at http://www.gnu.org/software/coreutils/. If you have a bug report for this manual page, see http://www.gnu.org/software/coreutils/. This page was obtained from the tarball coreutils-9.4.tar.xz fetched from http://ftp.gnu.org/gnu/coreutils/ on 2023-12-22. If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org GNU coreutils 9.4 August 2023 COMM(1) Pages that refer to this page: join(1), uniq(1) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH. | # comm\n\n> Select or reject lines common to two files. Both files must be sorted.\n> More information: <https://www.gnu.org/software/coreutils/comm>.\n\n- Produce three tab-separated columns: lines only in first file, lines only in second file and common lines:\n\n`comm {{file1}} {{file2}}`\n\n- Print only lines common to both files:\n\n`comm -12 {{file1}} {{file2}}`\n\n- Print only lines common to both files, reading one file from `stdin`:\n\n`cat {{file1}} | comm -12 - {{file2}}`\n\n- Get lines only found in first file, saving the result to a third file:\n\n`comm -23 {{file1}} {{file2}} > {{file1_only}}`\n\n- Print lines only found in second file, when the files aren't sorted:\n\n`comm -13 <(sort {{file1}}) <(sort {{file2}})`\n |
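A minimal sketch of the two most common invocations above (the file names are made up; both inputs are already sorted, as comm requires):

```shell
# Two sorted word lists; "banana" appears in both.
printf 'apple\nbanana\n'  > /tmp/fruits_a   # hypothetical input files
printf 'banana\ncherry\n' > /tmp/fruits_b

comm /tmp/fruits_a /tmp/fruits_b       # three columns: unique-to-a, unique-to-b, common
comm -12 /tmp/fruits_a /tmp/fruits_b   # prints only the common line: banana
```

For unsorted input, sort on the fly with process substitution (bash/zsh): `comm -12 <(sort a) <(sort b)`.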
command | command(1p) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training command(1p) Linux manual page PROLOG | NAME | SYNOPSIS | DESCRIPTION | OPTIONS | OPERANDS | STDIN | INPUT FILES | ENVIRONMENT VARIABLES | ASYNCHRONOUS EVENTS | STDOUT | STDERR | OUTPUT FILES | EXTENDED DESCRIPTION | EXIT STATUS | CONSEQUENCES OF ERRORS | APPLICATION USAGE | EXAMPLES | RATIONALE | FUTURE DIRECTIONS | SEE ALSO | COPYRIGHT COMMAND(1P) POSIX Programmer's Manual COMMAND(1P) PROLOG top This manual page is part of the POSIX Programmer's Manual. The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux. NAME top command execute a simple command SYNOPSIS top command [-p] command_name [argument...] command [-p][-v|-V] command_name DESCRIPTION top The command utility shall cause the shell to treat the arguments as a simple command, suppressing the shell function lookup that is described in Section 2.9.1.1, Command Search and Execution, item 1b. If the command_name is the same as the name of one of the special built-in utilities, the special properties in the enumerated list at the beginning of Section 2.14, Special Built-In Utilities shall not occur. In every other respect, if command_name is not the name of a function, the effect of command (with no options) shall be the same as omitting command. When the -v or -V option is used, the command utility shall provide information concerning how a command name is interpreted by the shell. OPTIONS top The command utility shall conform to the Base Definitions volume of POSIX.12017, Section 12.2, Utility Syntax Guidelines. The following options shall be supported: -p Perform the command search using a default value for PATH that is guaranteed to find all of the standard utilities. 
-v Write a string to standard output that indicates the pathname or command that will be used by the shell, in the current shell execution environment (see Section 2.12, Shell Execution Environment), to invoke command_name, but do not invoke command_name. * Utilities, regular built-in utilities, command_names including a <slash> character, and any implementation-defined functions that are found using the PATH variable (as described in Section 2.9.1.1, Command Search and Execution), shall be written as absolute pathnames. * Shell functions, special built-in utilities, regular built-in utilities not associated with a PATH search, and shell reserved words shall be written as just their names. * An alias shall be written as a command line that represents its alias definition. * Otherwise, no output shall be written and the exit status shall reflect that the name was not found. -V Write a string to standard output that indicates how the name given in the command_name operand will be interpreted by the shell, in the current shell execution environment (see Section 2.12, Shell Execution Environment), but do not invoke command_name. Although the format of this string is unspecified, it shall indicate in which of the following categories command_name falls and shall include the information stated: * Utilities, regular built-in utilities, and any implementation-defined functions that are found using the PATH variable (as described in Section 2.9.1.1, Command Search and Execution), shall be identified as such and include the absolute pathname in the string. * Other shell functions shall be identified as functions. * Aliases shall be identified as aliases and their definitions included in the string. * Special built-in utilities shall be identified as special built-in utilities. * Regular built-in utilities not associated with a PATH search shall be identified as regular built-in utilities. (The term ``regular'' need not be used.) 
* Shell reserved words shall be identified as reserved words. OPERANDS top The following operands shall be supported: argument One of the strings treated as an argument to command_name. command_name The name of a utility or a special built-in utility. STDIN top Not used. INPUT FILES top None. ENVIRONMENT VARIABLES top The following environment variables shall affect the execution of command: LANG Provide a default value for the internationalization variables that are unset or null. (See the Base Definitions volume of POSIX.12017, Section 8.2, Internationalization Variables for the precedence of internationalization variables used to determine the values of locale categories.) LC_ALL If set to a non-empty string value, override the values of all the other internationalization variables. LC_CTYPE Determine the locale for the interpretation of sequences of bytes of text data as characters (for example, single-byte as opposed to multi-byte characters in arguments). LC_MESSAGES Determine the locale that should be used to affect the format and contents of diagnostic messages written to standard error and informative messages written to standard output. NLSPATH Determine the location of message catalogs for the processing of LC_MESSAGES. PATH Determine the search path used during the command search described in Section 2.9.1.1, Command Search and Execution, except as described under the -p option. ASYNCHRONOUS EVENTS top Default. STDOUT top When the -v option is specified, standard output shall be formatted as: "%s\n", <pathname or command> When the -V option is specified, standard output shall be formatted as: "%s\n", <unspecified> STDERR top The standard error shall be used only for diagnostic messages. OUTPUT FILES top None. EXTENDED DESCRIPTION top None. EXIT STATUS top When the -v or -V options are specified, the following exit values shall be returned: 0 Successful completion. >0 The command_name could not be found or an error occurred. 
Otherwise, the following exit values shall be returned: 126 The utility specified by command_name was found but could not be invoked. 127 An error occurred in the command utility or the utility specified by command_name could not be found. Otherwise, the exit status of command shall be that of the simple command specified by the arguments to command. CONSEQUENCES OF ERRORS top Default. The following sections are informative. APPLICATION USAGE top The order for command search allows functions to override regular built-ins and path searches. This utility is necessary to allow functions that have the same name as a utility to call the utility (instead of a recursive call to the function). The system default path is available using getconf; however, since getconf may need to have the PATH set up before it can be called itself, the following can be used: command -p getconf PATH There are some advantages to suppressing the special characteristics of special built-ins on occasion. For example: command exec > unwritable-file does not cause a non-interactive script to abort, so that the output status can be checked by the script. The command, env, nohup, time, and xargs utilities have been specified to use exit code 127 if an error occurs so that applications can distinguish ``failure to find a utility'' from ``invoked utility exited with an error indication''. The value 127 was chosen because it is not commonly used for other meanings; most utilities use small values for ``normal error conditions'' and the values above 128 can be confused with termination due to receipt of a signal. The value 126 was chosen in a similar manner to indicate that the utility could be found, but not invoked. Some scripts produce meaningful error messages differentiating the 126 and 127 cases. 
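The 126/127 distinction described above is easy to observe directly (a sketch; the missing-command name is invented):

```shell
# 127: the named utility cannot be found anywhere on PATH.
sh -c 'no_such_utility_xyz' 2>/dev/null || status=$?
echo "not found -> ${status}"      # prints: not found -> 127

# 126: the file exists but is not executable, so it cannot be invoked.
t=$(mktemp)                        # mktemp creates the file non-executable
sh -c "$t" 2>/dev/null || status=$?
echo "not invokable -> ${status}"  # prints: not invokable -> 126
rm -f "$t"
```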
The distinction between exit codes 126 and 127 is based on KornShell practice that uses 127 when all attempts to exec the utility fail with [ENOENT], and uses 126 when any attempt to exec the utility fails for any other reason. Since the -v and -V options of command produce output in relation to the current shell execution environment, command is generally provided as a shell regular built-in. If it is called in a subshell or separate utility execution environment, such as one of the following: (PATH=foo command -v) nohup command -v it does not necessarily produce correct results. For example, when called with nohup or an exec function, in a separate utility execution environment, most implementations are not able to identify aliases, functions, or special built-ins. Two types of regular built-ins could be encountered on a system and these are described separately by command. The description of command search in Section 2.9.1.1, Command Search and Execution allows for a standard utility to be implemented as a regular built-in as long as it is found in the appropriate place in a PATH search. So, for example, command -v true might yield /bin/true or some similar pathname. Other implementation-defined utilities that are not defined by this volume of POSIX.12017 might exist only as built-ins and have no pathname associated with them. These produce output identified as (regular) built- ins. Applications encountering these are not able to count on execing them, using them with nohup, overriding them with a different PATH, and so on. EXAMPLES top 1. Make a version of cd that always prints out the new working directory exactly once: cd() { command cd "$@" >/dev/null pwd } 2. Start off a ``secure shell script'' in which the script avoids being spoofed by its parent: IFS=' ' # The preceding value should be <space><tab><newline>. # Set IFS to its default value. \unalias -a # Unset all possible aliases. 
# Note that unalias is escaped to prevent an alias # being used for unalias. unset -f command # Ensure command is not a user function. PATH="$(command -p getconf PATH):$PATH" # Put on a reliable PATH prefix. # ... At this point, given correct permissions on the directories called by PATH, the script has the ability to ensure that any utility it calls is the intended one. It is being very cautious because it assumes that implementation extensions may be present that would allow user functions to exist when it is invoked; this capability is not specified by this volume of POSIX.12017, but it is not prohibited as an extension. For example, the ENV variable precedes the invocation of the script with a user start-up script. Such a script could define functions to spoof the application. RATIONALE top Since command is a regular built-in utility it is always found prior to the PATH search. There is nothing in the description of command that implies the command line is parsed any differently from that of any other simple command. For example: command a | b ; c is not parsed in any special way that causes '|' or ';' to be treated other than a pipe operator or <semicolon> or that prevents function lookup on b or c. The command utility is somewhat similar to the Eighth Edition shell builtin command, but since command also goes to the file system to search for utilities, the name builtin would not be intuitive. The command utility is most likely to be provided as a regular built-in. It is not listed as a special built-in for the following reasons: * The removal of exportable functions made the special precedence of a special built-in unnecessary. * A special built-in has special properties (see Section 2.14, Special Built-In Utilities) that were inappropriate for invoking other utilities. 
For example, two commands such as: date > unwritable-file command date > unwritable-file would have entirely different results; in a non-interactive script, the former would continue to execute the next command, the latter would abort. Introducing this semantic difference along with suppressing functions was seen to be non-intuitive. The -p option is present because it is useful to be able to ensure a safe path search that finds all the standard utilities. This search might not be identical to the one that occurs through one of the exec functions (as defined in the System Interfaces volume of POSIX.1-2017) when PATH is unset. At the very least, this feature is required to allow the script to access the correct version of getconf so that the value of the default path can be accurately retrieved. The command -v and -V options were added to satisfy requirements from users that are currently accomplished by three different historical utilities: type in the System V shell, whence in the KornShell, and which in the C shell. Since there is no historical agreement on how and what to accomplish here, the POSIX command utility was enhanced and the historical utilities were left unmodified. The C shell's which merely conducts a path search. The KornShell whence is more elaborate: in addition to the categories required by POSIX, it also reports on tracked aliases, exported aliases, and undefined functions. The output format of -V was left mostly unspecified because human users are its only audience. Applications should not be written to care about this information; they can use the output of -v to differentiate between various types of commands, but the additional information that may be emitted by the more verbose -V is not needed and should not be arbitrarily constrained in its verbosity or localization for application parsing reasons. FUTURE DIRECTIONS top None. 
SEE ALSO top Section 2.9.1.1, Command Search and Execution, Section 2.12, Shell Execution Environment, Section 2.14, Special Built-In Utilities, sh(1p), type(1p) The Base Definitions volume of POSIX.1-2017, Chapter 8, Environment Variables, Section 12.2, Utility Syntax Guidelines The System Interfaces volume of POSIX.1-2017, exec(1p) COPYRIGHT top Portions of this text are reprinted and reproduced in electronic form from IEEE Std 1003.1-2017, Standard for Information Technology -- Portable Operating System Interface (POSIX), The Open Group Base Specifications Issue 7, 2018 Edition, Copyright (C) 2018 by the Institute of Electrical and Electronics Engineers, Inc and The Open Group. In the event of any discrepancy between this version and the original IEEE and The Open Group Standard, the original IEEE and The Open Group Standard is the referee document. The original Standard can be obtained online at http://www.opengroup.org/unix/online.html . Any typographical or formatting errors that appear in this page are most likely to have been introduced during the conversion of the source files to man page format. To report such errors, see https://www.kernel.org/doc/man-pages/reporting_bugs.html . IEEE/The Open Group 2017 COMMAND(1P) Pages that refer to this page: type(1p) | # command\n\n> `command` forces the shell to execute the program, ignoring any functions, builtins, and aliases with the same name.\n> More information: <https://manned.org/command>.\n\n- Execute the `ls` program literally, even if an `ls` alias exists:\n\n`command {{ls}}`\n\n- Display the path to the executable or the alias definition of a specific command:\n\n`command -v {{command_name}}`\n |
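The two TLDR examples above can be combined into a small sketch (the shadowing function is made up for illustration):

```shell
# A function shadows ls; "command" bypasses it and runs the real utility.
ls() { echo 'shadowed by a function'; }
command ls / >/dev/null      # invokes the ls program, not the function
unset -f ls

command -v ls                # prints the pathname the shell would invoke
command -p getconf PATH      # a default PATH guaranteed to find standard utilities
```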
compress | compress(1p) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training compress(1p) Linux manual page PROLOG | NAME | SYNOPSIS | DESCRIPTION | OPTIONS | OPERANDS | STDIN | INPUT FILES | ENVIRONMENT VARIABLES | ASYNCHRONOUS EVENTS | STDOUT | STDERR | OUTPUT FILES | EXTENDED DESCRIPTION | EXIT STATUS | CONSEQUENCES OF ERRORS | APPLICATION USAGE | EXAMPLES | RATIONALE | FUTURE DIRECTIONS | SEE ALSO | COPYRIGHT COMPRESS(1P) POSIX Programmer's Manual COMPRESS(1P) PROLOG top This manual page is part of the POSIX Programmer's Manual. The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux. NAME top compress compress data SYNOPSIS top compress [-fv] [-b bits] [file...] compress [-cfv] [-b bits] [file] DESCRIPTION top The compress utility shall attempt to reduce the size of the named files by using adaptive Lempel-Ziv coding algorithm. Note: Lempel-Ziv is US Patent 4464650, issued to William Eastman, Abraham Lempel, Jacob Ziv, Martin Cohn on August 7th, 1984, and assigned to Sperry Corporation. Lempel-Ziv-Welch compression is covered by US Patent 4558302, issued to Terry A. Welch on December 10th, 1985, and assigned to Sperry Corporation. On systems not supporting adaptive Lempel-Ziv coding algorithm, the input files shall not be changed and an error value greater than two shall be returned. Except when the output is to the standard output, each file shall be replaced by one with the extension .Z. If the invoking process has appropriate privileges, the ownership, modes, access time, and modification time of the original file are preserved. If appending the .Z to the filename would make the name exceed {NAME_MAX} bytes, the command shall fail. If no files are specified, the standard input shall be compressed to the standard output. 
OPTIONS top The compress utility shall conform to the Base Definitions volume of POSIX.12017, Section 12.2, Utility Syntax Guidelines. The following options shall be supported: -b bits Specify the maximum number of bits to use in a code. For a conforming application, the bits argument shall be: 9 <= bits <= 14 The implementation may allow bits values of greater than 14. The default is 14, 15, or 16. -c Cause compress to write to the standard output; the input file is not changed, and no .Z files are created. -f Force compression of file, even if it does not actually reduce the size of the file, or if the corresponding file.Z file already exists. If the -f option is not given, and the process is not running in the background, the user is prompted as to whether an existing file.Z file should be overwritten. If the response is affirmative, the existing file will be overwritten. -v Write the percentage reduction of each file to standard error. OPERANDS top The following operand shall be supported: file A pathname of a file to be compressed. STDIN top The standard input shall be used only if no file operands are specified, or if a file operand is '-'. INPUT FILES top If file operands are specified, the input files contain the data to be compressed. ENVIRONMENT VARIABLES top The following environment variables shall affect the execution of compress: LANG Provide a default value for the internationalization variables that are unset or null. (See the Base Definitions volume of POSIX.12017, Section 8.2, Internationalization Variables for the precedence of internationalization variables used to determine the values of locale categories.) LC_ALL If set to a non-empty string value, override the values of all the other internationalization variables. LC_COLLATE Determine the locale for the behavior of ranges, equivalence classes, and multi-character collating elements used in the extended regular expression defined for the yesexpr locale keyword in the LC_MESSAGES category. 
LC_CTYPE Determine the locale for the interpretation of sequences of bytes of text data as characters (for example, single-byte as opposed to multi-byte characters in arguments), the behavior of character classes used in the extended regular expression defined for the yesexpr locale keyword in the LC_MESSAGES category. LC_MESSAGES Determine the locale used to process affirmative responses, and the locale used to affect the format and contents of diagnostic messages, prompts, and the output from the -v option written to standard error. NLSPATH Determine the location of message catalogs for the processing of LC_MESSAGES. ASYNCHRONOUS EVENTS top Default. STDOUT top If no file operands are specified, or if a file operand is '-', or if the -c option is specified, the standard output contains the compressed output. STDERR top The standard error shall be used only for diagnostic and prompt messages and the output from -v. OUTPUT FILES top The output files shall contain the compressed output. The format of compressed files is unspecified and interchange of such files between implementations (including access via unspecified file sharing mechanisms) is not required by POSIX.1-2008. EXTENDED DESCRIPTION top None. EXIT STATUS top The following exit values shall be returned: 0 Successful completion. 1 An error occurred. 2 One or more files were not compressed because they would have increased in size (and the -f option was not specified). >2 An error occurred. CONSEQUENCES OF ERRORS top The input file shall remain unmodified. The following sections are informative. APPLICATION USAGE top The amount of compression obtained depends on the size of the input, the number of bits per code, and the distribution of common substrings. Typically, text such as source code or English is reduced by 50-60%. Compression is generally much better than that achieved by Huffman coding or adaptive Huffman coding (compact), and takes less time to compute. 
Although compress strictly follows the default actions upon receipt of a signal or when an error occurs, some unexpected results may occur. In some implementations it is likely that a partially compressed file is left in place, alongside its uncompressed input file. Since the general operation of compress is to delete the uncompressed file only after the .Z file has been successfully filled, an application should always carefully check the exit status of compress before arbitrarily deleting files that have like-named neighbors with .Z suffixes. The limit of 14 on the bits option-argument is to achieve portability to all systems (within the restrictions imposed by the lack of an explicit published file format). Some implementations based on 16-bit architectures cannot support 15 or 16-bit uncompression. EXAMPLES top None. RATIONALE top None. FUTURE DIRECTIONS top None. SEE ALSO top uncompress(1p), zcat(1p) The Base Definitions volume of POSIX.12017, Chapter 8, Environment Variables, Section 12.2, Utility Syntax Guidelines COPYRIGHT top Portions of this text are reprinted and reproduced in electronic form from IEEE Std 1003.1-2017, Standard for Information Technology -- Portable Operating System Interface (POSIX), The Open Group Base Specifications Issue 7, 2018 Edition, Copyright (C) 2018 by the Institute of Electrical and Electronics Engineers, Inc and The Open Group. In the event of any discrepancy between this version and the original IEEE and The Open Group Standard, the original IEEE and The Open Group Standard is the referee document. The original Standard can be obtained online at http://www.opengroup.org/unix/online.html . Any typographical or formatting errors that appear in this page are most likely to have been introduced during the conversion of the source files to man page format. To report such errors, see https://www.kernel.org/doc/man-pages/reporting_bugs.html . 
IEEE/The Open Group 2017 COMPRESS(1P) Pages that refer to this page: uncompress(1p), zcat(1p) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH. | # compress\n\n> Compress files using the Unix `compress` command.\n> More information: <https://manned.org/compress.1>.\n\n- Compress specific files:\n\n`compress {{path/to/file1 path/to/file2 ...}}`\n\n- Compress specific files, ignore non-existent ones:\n\n`compress -f {{path/to/file1 path/to/file2 ...}}`\n\n- Specify the maximum compression bits (9-16 bits):\n\n`compress -b {{bits}}`\n\n- Write to `stdout` (no files are changed):\n\n`compress -c {{path/to/file}}`\n\n- Decompress files (functions like `uncompress`):\n\n`compress -d {{path/to/file}}`\n\n- Display compression percentage:\n\n`compress -v {{path/to/file}}`\n |
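A sketch of a round trip through compress/uncompress. The utility is often packaged separately (e.g. as "ncompress") and may be absent, hence the guard:

```shell
tmp=$(mktemp -d)                      # scratch directory for the example
printf 'aaaaaaaaaaaaaaaaaaaaaaaa\n' > "$tmp/data"
if command -v compress >/dev/null 2>&1; then
    compress -f "$tmp/data"           # -f forces compression; replaces data with data.Z
    ls "$tmp"                         # now shows data.Z
    uncompress "$tmp/data.Z"          # restores the original file
fi
rm -rf "$tmp"
```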
coredumpctl | coredumpctl(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training coredumpctl(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | COMMANDS | OPTIONS | MATCHING | EXIT STATUS | ENVIRONMENT | EXAMPLES | SEE ALSO | NOTES | COLOPHON COREDUMPCTL(1) coredumpctl COREDUMPCTL(1) NAME top coredumpctl - Retrieve and process saved core dumps and metadata SYNOPSIS top coredumpctl [OPTIONS...] {COMMAND} [PID|COMM|EXE|MATCH...] DESCRIPTION top coredumpctl is a tool that can be used to retrieve and process core dumps and metadata which were saved by systemd-coredump(8). COMMANDS top The following commands are understood: list List core dumps captured in the journal matching specified characteristics. If no command is specified, this is the implied default. The output is designed to be human readable and contains a table with the following columns: TIME The timestamp of the crash, as reported by the kernel. Added in version 233. PID The identifier of the process that crashed. Added in version 233. UID, GID The user and group identifiers of the process that crashed. Added in version 233. SIGNAL The signal that caused the process to crash, when applicable. Added in version 233. COREFILE Information whether the coredump was stored, and whether it is still accessible: "none" means the core was not stored, "-" means that it was not available (for example because the process was not terminated by a signal), "present" means that the core file is accessible by the current user, "journal" means that the core was stored in the "journal", "truncated" is the same as one of the previous two, but the core was too large and was not stored in its entirety, "error" means that the core file cannot be accessed, most likely because of insufficient permissions, and "missing" means that the core was stored in a file, but this file has since been removed. Added in version 233. EXE The full path to the executable. 
For backtraces of scripts this is the name of the interpreter. Added in version 233. It's worth noting that different restrictions apply to data saved in the journal and core dump files saved in /var/lib/systemd/coredump, see overview in systemd-coredump(8). Thus it may very well happen that a particular core dump is still listed in the journal while its corresponding core dump file has already been removed. Added in version 215. info Show detailed information about the last core dump or core dumps matching specified characteristics captured in the journal. Added in version 215. dump Extract the last core dump matching specified characteristics. The core dump will be written on standard output, unless an output file is specified with --output=. Added in version 215. debug Invoke a debugger on the last core dump matching specified characteristics. By default, gdb(1) will be used. This may be changed using the --debugger= option or the $SYSTEMD_DEBUGGER environment variable. Use the --debugger-arguments= option to pass extra command line arguments to the debugger. Added in version 239. OPTIONS top The following options are understood: -h, --help Print a short help text and exit. --version Print a short version string and exit. --no-pager Do not pipe output into a pager. --no-legend Do not print the legend, i.e. column headers and the footer with hints. --json=MODE Shows output formatted as JSON. Expects one of "short" (for the shortest possible output without any redundant whitespace or line breaks), "pretty" (for a pretty version of the same, with indentation and line breaks) or "off" (to turn off JSON output, the default). -1 Show information of the most recent core dump only, instead of listing all known core dumps. Equivalent to --reverse -n 1. Added in version 215. -n INT Show at most the specified number of entries. The specified parameter must be an integer greater or equal to 1. Added in version 248. 
-S, --since Only print entries which are since the specified date. Added in version 233. -U, --until Only print entries which are until the specified date. Added in version 233. -r, --reverse Reverse output so that the newest entries are displayed first. Added in version 233. -F FIELD, --field=FIELD Print all possible data values the specified field takes in matching core dump entries of the journal. Added in version 215. -o FILE, --output=FILE Write the core to FILE. Added in version 215. --debugger=DEBUGGER Use the given debugger for the debug command. If not given and $SYSTEMD_DEBUGGER is unset, then gdb(1) will be used. Added in version 239. -A ARGS, --debugger-arguments=ARGS Pass the given ARGS as extra command line arguments to the debugger. Quote as appropriate when ARGS contain whitespace. (See Examples.) Added in version 248. --file=GLOB Takes a file glob as an argument. If specified, coredumpctl will operate on the specified journal files matching GLOB instead of the default runtime and system journal paths. May be specified multiple times, in which case files will be suitably interleaved. Added in version 246. -D DIR, --directory=DIR Use the journal files in the specified DIR. Added in version 225. --root=ROOT Use root directory ROOT when searching for coredumps. Added in version 252. --image=image Takes a path to a disk image file or block device node. If specified, all operations are applied to file system in the indicated disk image. This option is similar to --root=, but operates on file systems stored in disk images or block devices. The disk image should either contain just a file system or a set of file systems within a GPT partition table, following the Discoverable Partitions Specification[1]. For further information on supported disk images, see systemd-nspawn(1)'s switch of the same name. Added in version 252. --image-policy=policy Takes an image policy string as argument, as per systemd.image-policy(7). 
The policy is enforced when operating on the disk image specified via --image=, see above. If not specified defaults to the "*" policy, i.e. all recognized file systems in the image are used. -q, --quiet Suppresses informational messages about lack of access to journal files and possible in-flight coredumps. Added in version 233. --all Look at all available journal files in /var/log/journal/ (excluding journal namespaces) instead of only local ones. Added in version 250. MATCHING top A match can be: PID Process ID of the process that dumped core. An integer. Added in version 215. COMM Name of the executable (matches COREDUMP_COMM=). Must not contain slashes. Added in version 215. EXE Path to the executable (matches COREDUMP_EXE=). Must contain at least one slash. Added in version 215. MATCH General journalctl match filter, must contain an equals sign ("="). See journalctl(1). Added in version 215. EXIT STATUS top On success, 0 is returned; otherwise, a non-zero failure code is returned. Not finding any matching core dumps is treated as failure. ENVIRONMENT top $SYSTEMD_DEBUGGER Use the given debugger for the debug command. See the --debugger= option. Added in version 239. EXAMPLES top Example 1. List all the core dumps of a program $ coredumpctl list /usr/lib64/firefox/firefox TIME PID UID GID SIG COREFILE EXE SIZE Tue ... 8018 1000 1000 SIGSEGV missing /usr/lib64/firefox/firefox - Wed ... 251609 1000 1000 SIGTRAP missing /usr/lib64/firefox/firefox - Fri ... 552351 1000 1000 SIGSEGV present /usr/lib64/firefox/firefox 28.7M The journal has three entries pertaining to /usr/lib64/firefox/firefox, and only the last entry still has an available core file (in external storage on disk). Note that coredumpctl needs access to the journal files to retrieve the relevant entries from the journal. Thus, an unprivileged user will normally only see information about crashing programs of this user. Example 2. Invoke gdb on the last core dump $ coredumpctl debug Example 3. 
Use gdb to display full register info from the last core dump $ coredumpctl debug --debugger-arguments="-batch -ex 'info all-registers'" Example 4. Show information about a core dump matched by PID $ coredumpctl info 6654 PID: 6654 (bash) UID: 1000 (user) GID: 1000 (user) Signal: 11 (SEGV) Timestamp: Mon 2021-01-01 00:00:01 CET (20s ago) Command Line: bash -c $'kill -SEGV $$' Executable: /usr/bin/bash Control Group: /user.slice/user-1000.slice/... Unit: user@1000.service User Unit: vte-spawn-....scope Slice: user-1000.slice Owner UID: 1000 (user) Boot ID: ... Machine ID: ... Hostname: ... Storage: /var/lib/systemd/coredump/core.bash.1000.....zst (present) Size on Disk: 51.7K Message: Process 130414 (bash) of user 1000 dumped core. Stack trace of thread 130414: #0 0x00007f398142358b kill (libc.so.6 + 0x3d58b) #1 0x0000558c2c7fda09 kill_builtin (bash + 0xb1a09) #2 0x0000558c2c79dc59 execute_builtin.lto_priv.0 (bash + 0x51c59) #3 0x0000558c2c79709c execute_simple_command (bash + 0x4b09c) #4 0x0000558c2c798408 execute_command_internal (bash + 0x4c408) #5 0x0000558c2c7f6bdc parse_and_execute (bash + 0xaabdc) #6 0x0000558c2c85415c run_one_command.isra.0 (bash + 0x10815c) #7 0x0000558c2c77d040 main (bash + 0x31040) #8 0x00007f398140db75 __libc_start_main (libc.so.6 + 0x27b75) #9 0x0000558c2c77dd1e _start (bash + 0x31d1e) Example 5. Extract the last core dump of /usr/bin/bar to a file named bar.coredump $ coredumpctl -o bar.coredump dump /usr/bin/bar SEE ALSO top systemd-coredump(8), coredump.conf(5), systemd-journald.service(8), gdb(1) NOTES top 1. Discoverable Partitions Specification https://uapi-group.org/specifications/specs/discoverable_partitions_specification COLOPHON top This page is part of the systemd (systemd system and service manager) project. Information about the project can be found at http://www.freedesktop.org/wiki/Software/systemd. If you have a bug report for this manual page, see http://www.freedesktop.org/wiki/Software/systemd/#bugreports. 
This page was obtained from the project's upstream Git repository https://github.com/systemd/systemd.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-22.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org systemd 255 COREDUMPCTL(1) Pages that refer to this page: journalctl(1), core(5), coredump.conf(5), systemd.directives(7), systemd.index(7), systemd.journal-fields(7), systemd-coredump(8) | # coredumpctl\n\n> Retrieve and process saved core dumps and metadata.\n> More information: <https://www.freedesktop.org/software/systemd/man/coredumpctl.html>.\n\n- List all captured core dumps:\n\n`coredumpctl list`\n\n- List captured core dumps for a program:\n\n`coredumpctl list {{program}}`\n\n- Show information about the core dumps matching a program with `PID`:\n\n`coredumpctl info {{PID}}`\n\n- Invoke debugger using the last core dump of a program:\n\n`coredumpctl debug {{program}}`\n\n- Extract the last core dump of a program to a file:\n\n`coredumpctl --output={{path/to/file}} dump {{program}}`\n |
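The MATCHING rules above amount to a small classification of each positional argument. As an illustrative sketch (this helper function is hypothetical; coredumpctl performs the equivalent test internally), the decision can be written in portable shell:

```shell
#!/bin/sh
# Classify a coredumpctl match argument, following the MATCHING section:
#   pure integer     -> PID
#   contains "="     -> MATCH (journalctl match filter)
#   contains a slash -> EXE (path to the executable)
#   anything else    -> COMM (executable name, no slashes allowed)
classify_match() {
  arg=$1
  case $arg in
    ''|*[!0-9]*) ;;        # not a pure integer: fall through
    *) echo PID; return ;; # all digits: process ID
  esac
  case $arg in
    *=*) echo MATCH ;;
    */*) echo EXE ;;
    *)   echo COMM ;;
  esac
}

classify_match 6654                        # prints "PID"
classify_match firefox                     # prints "COMM"
classify_match /usr/lib64/firefox/firefox  # prints "EXE"
classify_match COREDUMP_UID=1000           # prints "MATCH"
```

So `coredumpctl info 6654` treats its argument as a PID, while `coredumpctl list /usr/lib64/firefox/firefox` matches on the executable path.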
cp | cp(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training cp(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | AUTHOR | REPORTING BUGS | COPYRIGHT | SEE ALSO | COLOPHON CP(1) User Commands CP(1) NAME top cp - copy files and directories SYNOPSIS top cp [OPTION]... [-T] SOURCE DEST cp [OPTION]... SOURCE... DIRECTORY cp [OPTION]... -t DIRECTORY SOURCE... DESCRIPTION top Copy SOURCE to DEST, or multiple SOURCE(s) to DIRECTORY. Mandatory arguments to long options are mandatory for short options too. -a, --archive same as -dR --preserve=all --attributes-only don't copy the file data, just the attributes --backup[=CONTROL] make a backup of each existing destination file -b like --backup but does not accept an argument --copy-contents copy contents of special files when recursive -d same as --no-dereference --preserve=links --debug explain how a file is copied. Implies -v -f, --force if an existing destination file cannot be opened, remove it and try again (this option is ignored when the -n option is also used) -i, --interactive prompt before overwrite (overrides a previous -n option) -H follow command-line symbolic links in SOURCE -l, --link hard link files instead of copying -L, --dereference always follow symbolic links in SOURCE -n, --no-clobber do not overwrite an existing file (overrides a -u or previous -i option). See also --update -P, --no-dereference never follow symbolic links in SOURCE -p same as --preserve=mode,ownership,timestamps --preserve[=ATTR_LIST] preserve the specified attributes --no-preserve=ATTR_LIST don't preserve the specified attributes --parents use full source file name under DIRECTORY -R, -r, --recursive copy directories recursively --reflink[=WHEN] control clone/CoW copies. See below --remove-destination remove each existing destination file before attempting to open it (contrast with --force) --sparse=WHEN control creation of sparse files. 
See below --strip-trailing-slashes remove any trailing slashes from each SOURCE argument -s, --symbolic-link make symbolic links instead of copying -S, --suffix=SUFFIX override the usual backup suffix -t, --target-directory=DIRECTORY copy all SOURCE arguments into DIRECTORY -T, --no-target-directory treat DEST as a normal file --update[=UPDATE] control which existing files are updated; UPDATE={all,none,older(default)}. See below -u equivalent to --update[=older] -v, --verbose explain what is being done -x, --one-file-system stay on this file system -Z set SELinux security context of destination file to default type --context[=CTX] like -Z, or if CTX is specified then set the SELinux or SMACK security context to CTX --help display this help and exit --version output version information and exit ATTR_LIST is a comma-separated list of attributes. Attributes are 'mode' for permissions (including any ACL and xattr permissions), 'ownership' for user and group, 'timestamps' for file timestamps, 'links' for hard links, 'context' for security context, 'xattr' for extended attributes, and 'all' for all attributes. By default, sparse SOURCE files are detected by a crude heuristic and the corresponding DEST file is made sparse as well. That is the behavior selected by --sparse=auto. Specify --sparse=always to create a sparse DEST file whenever the SOURCE file contains a long enough sequence of zero bytes. Use --sparse=never to inhibit creation of sparse files. UPDATE controls which existing files in the destination are replaced. 'all' is the default operation when an --update option is not specified, and results in all existing files in the destination being replaced. 'none' is similar to the --no-clobber option, in that no files in the destination are replaced, but also skipped files do not induce a failure. 'older' is the default operation when --update is specified, and results in files being replaced if they're older than the corresponding source file. 
When --reflink[=always] is specified, perform a lightweight copy, where the data blocks are copied only when modified. If this is not possible the copy fails, or if --reflink=auto is specified, fall back to a standard copy. Use --reflink=never to ensure a standard copy is performed. The backup suffix is '~', unless set with --suffix or SIMPLE_BACKUP_SUFFIX. The version control method may be selected via the --backup option or through the VERSION_CONTROL environment variable. Here are the values: none, off never make backups (even if --backup is given) numbered, t make numbered backups existing, nil numbered if numbered backups exist, simple otherwise simple, never always make simple backups As a special case, cp makes a backup of SOURCE when the force and backup options are given and SOURCE and DEST are the same name for an existing, regular file. AUTHOR top Written by Torbjorn Granlund, David MacKenzie, and Jim Meyering. REPORTING BUGS top GNU coreutils online help: <https://www.gnu.org/software/coreutils/> Report any translation bugs to <https://translationproject.org/team/> COPYRIGHT top Copyright 2023 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. SEE ALSO top install(1) Full documentation <https://www.gnu.org/software/coreutils/cp> or available locally via: info '(coreutils) cp invocation' COLOPHON top This page is part of the coreutils (basic file, shell and text manipulation utilities) project. Information about the project can be found at http://www.gnu.org/software/coreutils/. If you have a bug report for this manual page, see http://www.gnu.org/software/coreutils/. This page was obtained from the tarball coreutils-9.4.tar.xz fetched from http://ftp.gnu.org/gnu/coreutils/ on 2023-12-22. 
If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org GNU coreutils 9.4 August 2023 CP(1) Pages that refer to this page: install(1), pmlogmv(1), rsync(1), cpuset(7), symlink(7), e2image(8), readprofile(8), swapon(8) | # cp\n\n> Copy files and directories.\n> More information: <https://www.gnu.org/software/coreutils/cp>.\n\n- Copy a file to another location:\n\n`cp {{path/to/source_file.ext}} {{path/to/target_file.ext}}`\n\n- Copy a file into another directory, keeping the filename:\n\n`cp {{path/to/source_file.ext}} {{path/to/target_parent_directory}}`\n\n- Recursively copy a directory's contents to another location (if the destination exists, the directory is copied inside it):\n\n`cp -r {{path/to/source_directory}} {{path/to/target_directory}}`\n\n- Copy a directory recursively, in verbose mode (shows files as they are copied):\n\n`cp -vr {{path/to/source_directory}} {{path/to/target_directory}}`\n\n- Copy multiple files at once to a directory:\n\n`cp -t {{path/to/destination_directory}} {{path/to/file1 path/to/file2 ...}}`\n\n- Copy all files with a specific extension to another location, in interactive mode (prompts user before overwriting):\n\n`cp -i {{*.ext}} {{path/to/target_directory}}`\n\n- Follow symbolic links before copying:\n\n`cp -L {{link}} {{path/to/target_directory}}`\n\n- Use the full path of source files, creating any missing intermediate directories when copying:\n\n`cp --parents {{source/path/to/file}} {{path/to/target_file}}`\n |
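The backup rules described above (the simple '~' suffix versus numbered backups) are easy to verify in a scratch directory; this is a minimal sketch assuming GNU cp and a POSIX shell:

```shell
#!/bin/sh
# Exercise GNU cp's --backup modes in a temporary directory.
set -e
dir=$(mktemp -d)
echo old > "$dir/dest"
echo new > "$dir/src"

# Simple backup: the existing destination is kept as dest~ before overwrite.
cp --backup=simple "$dir/src" "$dir/dest"
cat "$dir/dest~"   # prints "old"

# Numbered backup: each overwrite keeps the previous file as dest.~N~.
echo newer > "$dir/src"
cp --backup=numbered "$dir/src" "$dir/dest"
ls "$dir"          # lists dest, its backups, and src
```

After the two copies, dest contains "newer", dest~ holds the original "old", and dest.~1~ holds the intermediate "new".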
cron | cron(8) - Linux manual page cron(8) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | SIGNALS | CLUSTERING SUPPORT | CAVEATS | SEE ALSO | AUTHOR | COLOPHON CRON(8) System Administration CRON(8) NAME top crond - daemon to execute scheduled commands SYNOPSIS top crond [-c | -h | -i | -n | -p | -P | -s | -m<mailcommand>] crond -x [ext,sch,proc,pars,load,misc,test,bit] crond -V DESCRIPTION top Cron is started from /etc/rc.d/init.d or /etc/init.d when classical sysvinit scripts are used. If systemd is enabled, the unit file is installed into /lib/systemd/system/crond.service and the daemon is started by the systemctl start crond.service command. It returns immediately, so there is no need to start it with the '&' parameter. Cron searches /var/spool/cron for crontab files which are named after accounts in /etc/passwd; the found crontabs are loaded into memory. Cron also searches for /etc/anacrontab and any files in the /etc/cron.d directory, which have a different format (see crontab(5)). Cron examines all stored crontabs and checks each job to see if it needs to be run in the current minute. When executing commands, any output is mailed to the owner of the crontab (or to the user specified in the MAILTO environment variable in the crontab, if it exists). Any job output can also be sent to syslog by using the -s option. There are two ways in which changes in crontables are checked. The first method is checking the modtime of a file. The second method is using inotify support. Use of inotify is logged in the /var/log/cron log after the daemon is started. The inotify support checks for changes in all crontables and accesses the hard disk only when a change is detected. When using the modtime option, Cron checks its crontables' modtimes every minute for any changes and reloads the crontables which have changed.
There is no need to restart Cron after some of the crontables were modified. The modtime option is also used when inotify can not be initialized. Cron checks these files and directories: /etc/crontab system crontab. Nowadays the file is empty by default. Originally it was usually used to run daily, weekly, monthly jobs. By default these jobs are now run through anacron which reads /etc/anacrontab configuration file. See anacrontab(5) for more details. /etc/cron.d/ directory that contains system cronjobs stored for different users. /var/spool/cron directory that contains user crontables created by the crontab command. Note that the crontab(1) command updates the modtime of the spool directory whenever it changes a crontab. Daylight Saving Time and other time changes Local time changes of less than three hours, such as those caused by the Daylight Saving Time changes, are handled in a special way. This only applies to jobs that run at a specific time and jobs that run with a granularity greater than one hour. Jobs that run more frequently are scheduled normally. If time was adjusted one hour forward, those jobs that would have run in the interval that has been skipped will be run immediately. Conversely, if time was adjusted backward, running the same job twice is avoided. Time changes of more than 3 hours are considered to be corrections to the clock or the timezone, and the new time is used immediately. It is possible to use different time zones for crontables. See crontab(5) for more information. PAM Access Control Cron supports access control with PAM if the system has PAM installed. For more information, see pam(8). A PAM configuration file for crond is installed in /etc/pam.d/crond. The daemon loads the PAM environment from the pam_env module. This can be overridden by defining specific settings in the appropriate crontab file. OPTIONS top -h Prints a help message and exits. -i Disables inotify support. 
-m This option allows you to specify a shell command to use for sending Cron mail output instead of using sendmail(8). This command must accept a fully formatted mail message (with headers) on standard input and send it as a mail message to the recipients specified in the mail headers. Specifying the string off (i.e., crond -m off) will disable the sending of mail. -n Tells the daemon to run in the foreground. This can be useful when starting it out of init. When this option is used, the PAM configuration must be changed: /etc/pam.d/crond must not enable the pam_loginuid.so module. -f The same as -n, consistent with other crond implementations. -p Allows Cron to accept user-set crontables. -P Don't set PATH. PATH is instead inherited from the environment. -c This option enables clustering support, as described below. -s This option will direct Cron to send the job output to the system log using syslog(3). This is useful if your system does not have sendmail(8) installed or if mail is disabled. -x This option allows you to set debug flags. -V Print version and exit.
This has no effect on cron jobs specified in the /etc/crontab file or on files in the /etc/cron.d directory. These files are always run and considered host-specific. Rather than editing /var/spool/cron/.cron.hostname directly, use the -n option of crontab(1) to specify the host. You should ensure that all hosts in a cluster, and the file server from which they mount the shared crontab directory, have closely synchronised clocks, e.g., using ntpd(8), otherwise the results will be very unpredictable. Using cluster sharing automatically disables inotify support, because inotify cannot be relied on with network-mounted shared file systems. CAVEATS top All crontab files have to be regular files or symlinks to regular files; they must not be executable or writable for anyone else but the owner. This requirement can be overridden by using the -p option on the crond command line. If inotify support is in use, changes in the symlinked crontabs are not automatically noticed by the cron daemon. The cron daemon must receive a SIGHUP signal to reload the crontabs. This is a limitation of the inotify API. The syslog output will be used instead of mail when sendmail is not installed. SEE ALSO top crontab(1), crontab(5), inotify(7), pam(8) AUTHOR top Paul Vixie vixie@isc.org Marcela Mašláňová mmaslano@redhat.com Colin Dean colin@colin-dean.org Tomáš Mráz tmraz@fedoraproject.org COLOPHON top This page is part of the cronie (crond daemon) project. Information about the project can be found at https://github.com/cronie-crond/cronie. If you have a bug report for this manual page, see https://github.com/cronie-crond/cronie/issues. This page was obtained from the project's upstream Git repository https://github.com/cronie-crond/cronie.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-11-16.)
If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org cronie 2013-09-26 CRON(8) Pages that refer to this page: cronnext(1), crontab(1), pmfind_check(1), pmie(1), pmie_check(1), pmlogger(1), pmlogger_check(1), pmlogger_daily(1), crontab(5), passwd(5), pmlogger.control(5), hier(7), keyrings(7), persistent-keyring(7), user-keyring(7), anacron(8), fstrim(8), warnquota(8) | # cron\n\n> A system scheduler for running jobs or tasks unattended.\n> The command to submit, edit or delete entries to `cron` is called `crontab`.\n\n- View documentation for managing `cron` entries:\n\n`tldr crontab`\n |
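The clustering check described in CLUSTERING SUPPORT reduces to comparing one line of a file against the local hostname. A minimal sketch (a temporary directory stands in for /var/spool/cron; in real use the file is maintained with crontab -n, not by hand):

```shell
#!/bin/sh
# Sketch of cron's -c clustering decision: jobs in the shared spool run
# only on the host whose name matches the .cron.hostname file.
set -e
spool=$(mktemp -d)                 # stand-in for /var/spool/cron
uname -n > "$spool/.cron.hostname" # normally written via 'crontab -n'

if [ "$(cat "$spool/.cron.hostname")" = "$(uname -n)" ]; then
  echo "this host runs the spooled crontab jobs"
else
  echo "crontabs in this directory are ignored on this host"
fi
```

If the file is missing, or names a different host, every crontab in the directory is ignored on this host, which is the behaviour the man page describes for non-matching cluster members.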
crontab | crontab(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training crontab(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | CAVEATS | SEE ALSO | FILES | STANDARDS | DIAGNOSTICS | AUTHOR | COLOPHON CRONTAB(1) User Commands CRONTAB(1) NAME top crontab - maintains crontab files for individual users SYNOPSIS top crontab [-u user] <file | -> crontab [-T] <file | -> crontab [-u user] <-l | -r | -e> [-i] [-s] crontab -n [ hostname ] crontab -c crontab -V DESCRIPTION top Crontab is the program used to install a crontab table file, remove or list the existing tables used to serve the cron(8) daemon. Each user can have their own crontab, and though these are files in /var/spool/, they are not intended to be edited directly. For SELinux in MLS mode, you can define more crontabs for each range. For more information, see selinux(8). In this version of Cron it is possible to use a network-mounted shared /var/spool/cron across a cluster of hosts and specify that only one of the hosts should run the crontab jobs in the particular directory at any one time. You may also use crontab from any of these hosts to edit the same shared set of crontab files, and to set and query which host should run the crontab jobs. Scheduling cron jobs with crontab can be allowed or disallowed for different users. For this purpose, use the cron.allow and cron.deny files. If the cron.allow file exists, a user must be listed in it to be allowed to use crontab. If the cron.allow file does not exist but the cron.deny file does exist, then a user must not be listed in the cron.deny file in order to use crontab. If neither of these files exist, then only the super user is allowed to use crontab. Another way to restrict the scheduling of cron jobs beyond crontab is to use PAM authentication in /etc/security/access.conf to set up users, which are allowed or disallowed to use crontab or modify system cron jobs in the /etc/cron.d/ directory. 
The temporary directory can be set in an environment variable. If it is not set by the user, the /tmp directory is used. When listing a crontab on a terminal, the output will be colorized unless the NO_COLOR environment variable is set. When the crontab is edited or deleted, a backup of the last crontab will be saved to $XDG_CACHE_HOME/crontab/crontab.bak or $XDG_CACHE_HOME/crontab/crontab.<user>.bak if -u is used. If the XDG_CACHE_HOME environment variable is not set, $HOME/.cache will be used instead. OPTIONS top -u Specifies the name of the user whose crontab is to be modified. If this option is not used, crontab examines "your" crontab, i.e., the crontab of the person executing the command. If no crontab exists for a particular user, it is created for them the first time the crontab -u command is used under their username. -T Test the crontab file syntax without installing it. Validation stops at the first issue found, so a single run will not report all existing issues. -l Displays the current crontab on standard output. -r Removes the current crontab. -e Edits the current crontab using the editor specified by the VISUAL or EDITOR environment variables. After you exit from the editor, the modified crontab will be installed automatically. -i This option modifies the -r option to prompt the user for a 'y/Y' response before actually removing the crontab. -s Appends the current SELinux security context string as an MLS_LEVEL setting to the crontab file before editing / replacement occurs - see the documentation of MLS_LEVEL in crontab(5). -n This option is relevant only if cron(8) was started with the -c option, to enable clustering support. It is used to set the host in the cluster which should run the jobs specified in the crontab files in the /var/spool/cron directory.
If a hostname is supplied, the host whose hostname returned by gethostname(2) matches the supplied hostname, will be selected to run the selected cron jobs subsequently. If there is no host in the cluster matching the supplied hostname, or you explicitly specify an empty hostname, then the selected jobs will not be run at all. If the hostname is omitted, the name of the local host returned by gethostname(2) is used. Using this option has no effect on the /etc/crontab file and the files in the /etc/cron.d directory, which are always run, and considered host-specific. For more information on clustering support, see cron(8). -c This option is only relevant if cron(8) was started with the -c option, to enable clustering support. It is used to query which host in the cluster is currently set to run the jobs specified in the crontab files in the directory /var/spool/cron , as set using the -n option. -V Print version and exit. CAVEATS top The files cron.allow and cron.deny cannot be used to restrict the execution of cron jobs; they only restrict the use of crontab. In particular, restricting access to crontab has no effect on an existing crontab of a user. Its jobs will continue to be executed until the crontab is removed. The files cron.allow and cron.deny must be readable by the user invoking crontab. If this is not the case, then they are treated as non-existent. SEE ALSO top crontab(5), cron(8) FILES top /etc/cron.allow /etc/cron.deny STANDARDS top The crontab command conforms to IEEE Std1003.2-1992 (``POSIX'') with one exception: For replacing the current crontab with data from standard input the - has to be specified on the command line if the standard input is a TTY. This new command syntax differs from previous versions of Vixie Cron, as well as from the classic SVR3 syntax. DIAGNOSTICS top An informative usage message appears if you run a crontab with a faulty command defined in it. 
AUTHOR top Paul Vixie vixie@isc.org Colin Dean colin@colin-dean.org COLOPHON top This page is part of the cronie (crond daemon) project. Information about the project can be found at https://github.com/cronie-crond/cronie. If you have a bug report for this manual page, see https://github.com/cronie-crond/cronie/issues. This page was obtained from the project's upstream Git repository https://github.com/cronie-crond/cronie.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-11-16.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org cronie 2019-10-29 CRONTAB(1) Pages that refer to this page: cronnext(1), pmsnap(1), anacrontab(5), crontab(5), systemd.exec(5), cron(8)
| # crontab\n\n> Schedule cron jobs to run on a time interval for the current user.\n> More information: <https://crontab.guru/>.\n\n- Edit the crontab file for the current user:\n\n`crontab -e`\n\n- Edit the crontab file for a specific user:\n\n`sudo crontab -e -u {{user}}`\n\n- Replace the current crontab with the contents of the given file:\n\n`crontab {{path/to/file}}`\n\n- View a list of existing cron jobs for current user:\n\n`crontab -l`\n\n- Remove all cron jobs for the current user:\n\n`crontab -r`\n\n- Sample job which runs at 10:00 every day (* means any value):\n\n`0 10 * * * {{command_to_execute}}`\n\n- Sample crontab entry, which runs a command every 10 minutes:\n\n`*/10 * * * * {{command_to_execute}}`\n\n- Sample crontab entry, which runs a certain script at 02:30 every Friday:\n\n`30 2 * * Fri {{/absolute/path/to/script.sh}}`\n |
cryptsetup | cryptsetup(8) - Linux manual page NAME | SYNOPSIS | DESCRIPTION | BASIC ACTIONS | PLAIN MODE | LUKS EXTENSION | LOOP-AES EXTENSION | TCRYPT (TRUECRYPT AND VERACRYPT COMPATIBLE) EXTENSION | BITLK (WINDOWS BITLOCKER COMPATIBLE) EXTENSION | FVAULT2 (APPLE MACOS FILEVAULT2 COMPATIBLE) EXTENSION | MISCELLANEOUS ACTIONS | PLAIN DM-CRYPT OR LUKS? | WARNINGS | EXAMPLES | RETURN CODES | NOTES | AUTHORS | REPORTING BUGS | SEE ALSO | CRYPTSETUP CRYPTSETUP(8) Maintenance Commands CRYPTSETUP(8) NAME top cryptsetup - manage plain dm-crypt, LUKS, and other encrypted volumes SYNOPSIS top cryptsetup <action> [<options>] <action args> DESCRIPTION top cryptsetup is used to conveniently set up dm-crypt managed device-mapper mappings. These include plain dm-crypt volumes and LUKS volumes. The difference is that LUKS uses a metadata header and can hence offer more features than plain dm-crypt. On the other hand, the header is visible and vulnerable to damage. In addition, cryptsetup provides limited support for the use of loop-AES volumes, TrueCrypt, VeraCrypt, BitLocker and FileVault2 compatible volumes. For more information about a specific cryptsetup action, see cryptsetup-<action>(8), where <action> is the name of the cryptsetup action. BASIC ACTIONS top The following are valid actions for all supported device types. OPEN open <device> <name> --type <device_type> Opens (creates a mapping with) <name> backed by device <device>. See cryptsetup-open(8). CLOSE close <name> Removes the existing mapping <name> and wipes the key from kernel memory. See cryptsetup-close(8). STATUS status <name> Reports the status for the mapping <name>. See cryptsetup-status(8). RESIZE resize <name> Resizes an active mapping <name>. See cryptsetup-resize(8). REFRESH refresh <name> Refreshes parameters of active mapping <name>. See cryptsetup-refresh(8). 
REENCRYPT reencrypt <device> or --active-name <name> [<new_name>] Run LUKS device reencryption. See cryptsetup-reencrypt(8). PLAIN MODE top Plain dm-crypt encrypts the device sector-by-sector with a single, non-salted hash of the passphrase. No checks are performed, and no metadata is used. There is no formatting operation. When the raw device is mapped (opened), the usual device operations can be used on the mapped device, including filesystem creation. Mapped devices usually reside in /dev/mapper/<name>. The following are valid plain device type actions: OPEN open --type plain <device> <name> create <name> <device> (OBSOLETE syntax) Opens (creates a mapping with) <name> backed by device <device>. See cryptsetup-open(8). LUKS EXTENSION top LUKS, the Linux Unified Key Setup, is a standard for disk encryption. It adds a standardized header at the start of the device, a key-slot area directly behind the header and the bulk data area behind that. The whole set is called a 'LUKS container'. The device that a LUKS container resides on is called a 'LUKS device'. For most purposes, both terms can be used interchangeably. But note that when the LUKS header is at a nonzero offset in a device, then the device is not a LUKS device anymore, but has a LUKS container stored in it at an offset. LUKS can manage multiple passphrases that can be individually revoked or changed and that can be securely scrubbed from persistent media due to the use of anti-forensic stripes. Passphrases are protected against brute-force and dictionary attacks by a Password-Based Key Derivation Function (PBKDF). LUKS2 is a newer version of the header format that allows additional extensions like a different PBKDF algorithm or authenticated encryption. You can format a device with a LUKS2 header if you specify --type luks2 in the luksFormat command. For activation, the format is already recognized automatically. Each passphrase, also called a key in this document, is associated with one of 8 key-slots. 
Key operations that do not specify a slot affect the first slot that matches the supplied passphrase or the first empty slot if a new passphrase is added. The <device> parameter can also be specified by a LUKS UUID in the format UUID=<uuid>. Translation to real device name uses symlinks in /dev/disk/by-uuid directory. To specify a detached header, the --header parameter can be used in all LUKS commands and always takes precedence over the positional <device> parameter. The following are valid LUKS actions: FORMAT luksFormat <device> [<key file>] Initializes a LUKS partition and sets the initial passphrase (for key-slot 0). See cryptsetup-luksFormat(8). OPEN open --type luks <device> <name> luksOpen <device> <name> (old syntax) Opens the LUKS device <device> and sets up a mapping <name> after successful verification of the supplied passphrase. See cryptsetup-open(8). SUSPEND luksSuspend <name> Suspends an active device (all IO operations will block and accesses to the device will wait indefinitely) and wipes the encryption key from kernel memory. See cryptsetup-luksSuspend(8). RESUME luksResume <name> Resumes a suspended device and reinstates the encryption key. See cryptsetup-luksResume(8). ADD KEY luksAddKey <device> [<key file with new key>] Adds a new passphrase using an existing passphrase. See cryptsetup-luksAddKey(8). REMOVE KEY luksRemoveKey <device> [<key file with passphrase to be removed>] Removes the supplied passphrase from the LUKS device. See cryptsetup-luksRemoveKey(8). CHANGE KEY luksChangeKey <device> [<new key file>] Changes an existing passphrase. See cryptsetup-luksChangeKey(8). CONVERT KEY luksConvertKey <device> Converts an existing LUKS2 keyslot to new PBKDF parameters. See cryptsetup-luksConvertKey(8). KILL SLOT luksKillSlot <device> <key slot number> Wipe the key-slot number <key slot> from the LUKS device. See cryptsetup-luksKillSlot(8). 
ERASE erase <device> luksErase <device> (old syntax) Erase all keyslots and make the LUKS container permanently inaccessible. See cryptsetup-erase(8). UUID luksUUID <device> Print or set the UUID of a LUKS device. See cryptsetup-luksUUID(8). IS LUKS isLuks <device> Returns true if <device> is a LUKS device, false otherwise. See cryptsetup-isLuks(8). DUMP luksDump <device> Dump the header information of a LUKS device. See cryptsetup-luksDump(8). HEADER BACKUP luksHeaderBackup <device> --header-backup-file <file> Stores a binary backup of the LUKS header and keyslot area. See cryptsetup-luksHeaderBackup(8). HEADER RESTORE luksHeaderRestore <device> --header-backup-file <file> Restores a binary backup of the LUKS header and keyslot area from the specified file. See cryptsetup-luksHeaderRestore(8). TOKEN token <add|remove|import|export> <device> Manipulate token objects used for obtaining passphrases. See cryptsetup-token(8). CONVERT convert <device> --type <format> Converts the device between LUKS1 and LUKS2 format (if possible). See cryptsetup-convert(8). CONFIG config <device> Set permanent configuration options (store to LUKS header). See cryptsetup-config(8). LOOP-AES EXTENSION top cryptsetup supports mapping a loop-AES encrypted partition using a compatibility mode. OPEN open --type loopaes <device> <name> --key-file <keyfile> loopaesOpen <device> <name> --key-file <keyfile> (old syntax) Opens the loop-AES <device> and sets up a mapping <name>. See cryptsetup-open(8). See also section 7 of the FAQ and loop-AES <http://loop-aes.sourceforge.net> for more information regarding loop-AES. TCRYPT (TRUECRYPT AND VERACRYPT COMPATIBLE) EXTENSION top cryptsetup supports mapping of TrueCrypt, tcplay or VeraCrypt encrypted partitions using a native Linux kernel API. Header formatting and TCRYPT header changes are not supported; cryptsetup never changes the TCRYPT header on-device. 
The TCRYPT extension requires the kernel userspace crypto API to be available (introduced in Linux kernel 2.6.38). If you are configuring the kernel yourself, enable "User-space interface for symmetric key cipher algorithms" in the "Cryptographic API" section (CRYPTO_USER_API_SKCIPHER .config option). Because the TCRYPT header is encrypted, you always have to provide a valid passphrase and keyfiles. Cryptsetup should recognize all header variants, except legacy cipher chains using the LRW encryption mode with a 64-bit encryption block (namely, Blowfish in LRW mode is not recognized; this is a limitation of the kernel crypto API). VeraCrypt is an extension of the TrueCrypt header with an increased iteration count, so unlocking can take quite a lot of time. To open a VeraCrypt device with a custom Personal Iteration Multiplier (PIM) value, use either the --veracrypt-pim=<PIM> option to directly specify the PIM on the command-line or use --veracrypt-query-pim to be prompted for the PIM. The PIM value affects the number of iterations applied during key derivation. Please refer to PIM <https://www.veracrypt.fr/en/Personal%20Iterations%20Multiplier%20%28PIM%29.html> for more detailed information. If you need to disable VeraCrypt device support, use the --disable-veracrypt option. NOTE: Activation with tcryptOpen is supported only for cipher chains using LRW or XTS encryption modes. The tcryptDump command should work for all recognized TCRYPT devices and doesn't require superuser privileges. To map a system device (a device with a boot loader where the whole encrypted system resides), use the --tcrypt-system option. You can use a partition device as the parameter (the parameter must be a real partition device, not an image in a file); then only this partition is mapped. 
If you have the whole TCRYPT device as a file image and you want to map multiple partitions encrypted with system encryption, please create a loopback mapping with partitions first (losetup -P, see the losetup(8) man page for more info), and use a loop partition as the device parameter. If you use the whole base device as a parameter, one device for the whole system encryption is mapped. This mode is available only for backward compatibility with older cryptsetup versions which mapped TCRYPT system encryption using the whole device. To use a hidden header (and map the hidden device, if available), use the --tcrypt-hidden option. To explicitly use the backup (secondary) header, use the --tcrypt-backup option. NOTE: There is no protection for a hidden volume if the outer volume is mounted. The reason is that if there were any protection, it would require some metadata describing what to protect in the outer volume, and the hidden volume would become detectable. OPEN open --type tcrypt <device> <name> tcryptOpen <device> <name> (old syntax) Opens the TCRYPT (a TrueCrypt-compatible) <device> and sets up a mapping <name>. See cryptsetup-open(8). DUMP tcryptDump <device> Dump the header information of a TCRYPT device. See cryptsetup-tcryptDump(8). See also the TrueCrypt <https://en.wikipedia.org/wiki/TrueCrypt> and VeraCrypt <https://en.wikipedia.org/wiki/VeraCrypt> pages for more information. Please note that cryptsetup does not use TrueCrypt or VeraCrypt code; please report all problems related to this compatibility extension to the cryptsetup project. BITLK (WINDOWS BITLOCKER COMPATIBLE) EXTENSION top cryptsetup supports mapping of BitLocker and BitLocker to Go encrypted partitions using a native Linux kernel API. Header formatting and BITLK header changes are not supported; cryptsetup never changes the BITLK header on-device. The BITLK extension requires the kernel userspace crypto API to be available (for details see the TCRYPT section). 
Cryptsetup should recognize all BITLK header variants, except the legacy header used in Windows Vista systems and partially decrypted BitLocker devices. Activation of legacy devices encrypted in CBC mode requires at least Linux kernel version 5.3, and for devices using the Elephant diffuser, kernel 5.6. The bitlkDump command should work for all recognized BITLK devices and doesn't require superuser privileges. For unlocking with the open action, a password, a recovery passphrase, or a startup key must be provided. Additionally, unlocking using the volume key is supported. You must provide the BitLocker Full Volume Encryption Key (FVEK) using the --volume-key-file option. The key must be decrypted and without the header (only 128/256/512 bits of key data, depending on the used cipher and mode). Other unlocking methods (TPM, SmartCard) are not supported. OPEN open --type bitlk <device> <name> bitlkOpen <device> <name> (old syntax) Opens the BITLK (a BitLocker-compatible) <device> and sets up a mapping <name>. See cryptsetup-open(8). DUMP bitlkDump <device> Dump the header information of a BITLK device. See cryptsetup-bitlkDump(8). Please note that cryptsetup does not use any Windows BitLocker code; please report all problems related to this compatibility extension to the cryptsetup project. FVAULT2 (APPLE MACOS FILEVAULT2 COMPATIBLE) EXTENSION top cryptsetup supports the mapping of FileVault2 (FileVault2 full-disk encryption) by Apple for the macOS operating system using a native Linux kernel API. NOTE: cryptsetup supports only FileVault2 based on Core Storage and the HFS+ filesystem (introduced in MacOS X 10.7 Lion). It does NOT support the new version of FileVault based on the APFS filesystem used in recent macOS versions. Header formatting and FVAULT2 header changes are not supported; cryptsetup never changes the FVAULT2 header on-device. The FVAULT2 extension requires the kernel userspace crypto API to be available (for details, see the TCRYPT section) and a kernel driver for the HFS+ (hfsplus) filesystem. 
Cryptsetup should recognize the basic configuration for portable drives. The fvault2Dump command should work for all recognized FVAULT2 devices and doesn't require superuser privileges. For unlocking with the open action, a password must be provided. Other unlocking methods are not supported. OPEN open --type fvault2 <device> <name> fvault2Open <device> <name> (old syntax) Opens the FVAULT2 (a FileVault2-compatible) <device> (usually the second partition on the device) and sets up a mapping <name>. See cryptsetup-open(8). DUMP fvault2Dump <device> Dump the header information of an FVAULT2 device. See cryptsetup-fvault2Dump(8). Note that cryptsetup does not use any macOS code or proprietary specifications. Please report all problems related to this compatibility extension to the cryptsetup project. MISCELLANEOUS ACTIONS top REPAIR repair <device> Tries to repair the device metadata if possible. Currently supported only for the LUKS device type. See cryptsetup-repair(8). BENCHMARK benchmark <options> Benchmarks ciphers and KDF (key derivation function). See cryptsetup-benchmark(8). PLAIN DM-CRYPT OR LUKS? top Unless you understand the cryptographic background well, use LUKS. With plain dm-crypt there are a number of possible user errors that massively decrease security. While LUKS cannot fix them all, it can lessen the impact for many of them. WARNINGS top A lot of good information on the risks of using encrypted storage, on handling problems and on security aspects can be found in the Cryptsetup FAQ. Read it. Nonetheless, some risks deserve to be mentioned here. Backup: Storage media die. Encryption has no influence on that. Backup is mandatory for encrypted data as well, if the data has any worth. See the Cryptsetup FAQ for advice on how to do a backup of an encrypted volume. Character encoding: If you enter a passphrase with special symbols, the passphrase can change depending on character encoding. 
Keyboard settings can also change, which can make blind input hard or impossible. For example, switching from some ASCII 8-bit variant to UTF-8 can lead to a different binary encoding and hence a different passphrase seen by cryptsetup, even if what you see on the terminal is exactly the same. It is therefore highly recommended to select passphrase characters only from 7-bit ASCII, as the encoding for 7-bit ASCII stays the same for all ASCII variants and UTF-8. LUKS header: If the header of a LUKS volume gets damaged, all data is permanently lost unless you have a header backup. If a key-slot is damaged, it can only be restored from a header backup or if another active key-slot with a known passphrase is undamaged. Damaging the LUKS header is something people manage to do with surprising frequency. This risk is the result of a trade-off between security and safety, as LUKS is designed for fast and secure wiping by just overwriting the header and key-slot area. Previously used partitions: If a partition was previously used, it is a very good idea to wipe filesystem signatures, data, etc. before creating a LUKS or plain dm-crypt container on it. For a quick removal of filesystem signatures, use wipefs(8). Take care though that this may not remove everything. In particular, MD RAID signatures at the end of a device may survive. It also does not remove data. For a full wipe, overwrite the whole partition before container creation. If you do not know how to do that, the cryptsetup FAQ describes several options. EXAMPLES top Example 1: Create a LUKS2 container on block device /dev/sdX. sudo cryptsetup --type luks2 luksFormat /dev/sdX Example 2: Add an additional passphrase to key slot 5. sudo cryptsetup luksAddKey --key-slot 5 /dev/sdX Example 3: Create a LUKS header backup and save it to a file. sudo cryptsetup luksHeaderBackup /dev/sdX --header-backup-file /var/tmp/NameOfBackupFile Example 4: Open the LUKS container on /dev/sdX and map it to sdX_crypt. 
sudo cryptsetup open /dev/sdX sdX_crypt WARNING: The command in example 5 will erase all key slots. You cannot use your LUKS container afterwards unless you have a backup to restore. Example 5: Erase all key slots on /dev/sdX. sudo cryptsetup erase /dev/sdX Example 6: Restore the LUKS header from a backup file. sudo cryptsetup luksHeaderRestore /dev/sdX --header-backup-file /var/tmp/NameOfBackupFile RETURN CODES top Cryptsetup returns 0 on success and a non-zero value on error. Error codes are: 1 wrong parameters, 2 no permission (bad passphrase), 3 out of memory, 4 wrong device specified, 5 device already exists or device is busy. NOTES top Passphrase processing for PLAIN mode Note that no iterated hashing or salting is done in plain mode. If hashing is done, it is a single direct hash. This means that low-entropy passphrases are easy to attack in plain mode. From a terminal: The passphrase is read until the first newline, i.e. '\n'. The input without the newline character is processed with the default hash or the hash specified with --hash. The hash result will be truncated to the key size of the used cipher, or the size specified with -s. From stdin: Reading will continue until a newline (or until the maximum input size is reached), with the trailing newline stripped. The maximum input size is defined by the same compiled-in default as for the maximum key file size and can be overwritten using the --keyfile-size option. The data read will be hashed with the default hash or the hash specified with --hash. The hash result will be truncated to the key size of the used cipher, or the size specified with -s. Note that if --key-file=- is used for reading the key from stdin, trailing newlines are not stripped from the input. If "plain" is used as the argument to --hash, the input data will not be hashed. Instead, it will be zero-padded (if shorter than the key size) or truncated (if longer than the key size) and used directly as the binary key. 
This is useful for directly specifying a binary key. No warning will be given if the amount of data read from stdin is less than the key size. From a key file: It will be truncated to the key size of the used cipher or the size given by -s and directly used as a binary key. WARNING: The --hash argument is ignored. The --hash option is usable only for stdin input in plain mode. If the key file is shorter than the key, cryptsetup will quit with an error. The maximum input size is defined by the same compiled-in default as for the maximum key file size and can be overwritten using the --keyfile-size option. Passphrase processing for LUKS LUKS uses PBKDF to protect against dictionary attacks and to give some protection to low-entropy passphrases (see the cryptsetup FAQ). From a terminal: The passphrase is read until the first newline and then processed by PBKDF2 without the newline character. From stdin: LUKS will read passphrases from stdin up to the first newline character or the compiled-in maximum key file length. If --keyfile-size is given, it is ignored. From a key file: The complete keyfile is read up to the compiled-in maximum size. Newline characters do not terminate the input. The --keyfile-size option can be used to limit what is read. Passphrase processing: Whenever a passphrase is added to a LUKS header (luksAddKey, luksFormat), the user may specify how much time the passphrase processing should consume. The time is used to determine the iteration count for PBKDF2, and higher times will offer better protection for low-entropy passphrases, but open will take longer to complete. For passphrases that have entropy higher than the used key length, higher iteration times will not increase security. The default setting of one or two seconds is sufficient for most practical cases. The only exception is a low-entropy passphrase used on a device with a slow CPU, as this will result in a low iteration count. 
On a slow device, it may be advisable to increase the iteration time using the --iter-time option in order to obtain a higher iteration count. This does slow down all later luksOpen operations accordingly. Incoherent behavior for invalid passphrases/keys LUKS checks for a valid passphrase when an encrypted partition is unlocked. The behavior of plain dm-crypt is different. It will always decrypt with the passphrase given. If the given passphrase is wrong, the device mapped by plain dm-crypt will essentially still contain encrypted data and will be unreadable. Supported ciphers, modes, hashes and key sizes The available combinations of ciphers, modes, hashes and key sizes depend on kernel support. See /proc/crypto for a list of available options. You might need to load additional kernel crypto modules in order to get more options. For the --hash option, if the crypto backend is libgcrypt, then all algorithms supported by the gcrypt library are available. For other crypto backends, some algorithms may be missing. Notes on passphrases Mathematics can't be bribed. Make sure you keep your passphrases safe. There are a few nice tricks for constructing a fallback, when suddenly out of the blue, your brain refuses to cooperate. These fallbacks need LUKS, as it's only possible with LUKS to have multiple passphrases. Still, if your attacker model does not prevent it, storing your passphrase in a sealed envelope somewhere may be a good idea as well. Notes on Random Number Generators Random Number Generators (RNG) used in cryptsetup are always the kernel RNGs, without any modifications or additions to the data stream produced. There are two types of randomness cryptsetup/LUKS needs. One type (which always uses /dev/urandom) is used for salts, the AF splitter and for wiping deleted keyslots. The second type is used for the volume key. You can switch between using /dev/random and /dev/urandom here; see the --use-random and --use-urandom options. 
Using /dev/random on a system without enough entropy sources can cause luksFormat to block until the requested amount of random data is gathered. In a low-entropy situation (embedded system), this can take a very long time and potentially forever. At the same time, using /dev/urandom in a low-entropy situation will produce low-quality keys. This is a serious problem, but solving it is out of scope for a mere man page. See urandom(4) for more information. Authenticated disk encryption (EXPERIMENTAL) Since Linux kernel version 4.12, dm-crypt supports authenticated disk encryption. Normal disk encryption modes are length-preserving (a plaintext sector is of the same size as a ciphertext sector) and can provide only confidentiality protection, but not cryptographically sound data integrity protection. Authenticated modes require additional space per sector for an authentication tag and use Authenticated Encryption with Additional Data (AEAD) algorithms. If you configure a LUKS2 device with data integrity protection, there will be an underlying dm-integrity device, which provides additional per-sector metadata space and also provides data journal protection to ensure atomicity of data and metadata updates. Because there must be additional space for metadata and the journal, the available space for the device will be smaller than for length-preserving modes. The dm-crypt device then resides on top of such a dm-integrity device. All activation and deactivation of this device stack is performed by cryptsetup; there is no difference in using luksOpen for integrity-protected devices. If you want to format a LUKS2 device with data integrity protection, use the --integrity option. Since dm-integrity doesn't support discards (TRIM), the dm-crypt device on top of it inherits this, so the integrity protection mode doesn't support discards either. Some integrity modes require two independent keys (a key for encryption and one for authentication). Both these keys are stored in one LUKS keyslot. 
WARNING: All support for authenticated modes is experimental and there are only some modes available for now. Note that there are very few authenticated encryption algorithms that are suitable for disk encryption. You also cannot use CRC32 or any other non-cryptographic checksums (other than the special integrity mode "none"). If for some reason you want to have integrity control without using authentication mode, then you should configure dm-integrity independently of LUKS2. Notes on loopback device use Cryptsetup is usually used directly on a block device (disk partition or LVM volume). However, if the device argument is a file, cryptsetup tries to allocate a loopback device and map it into this file. This mode requires Linux kernel 2.6.25 or more recent, which supports the loop autoclear flag (the loop device is cleared on the last close automatically). Of course, you can always map a file to a loop device manually. See the cryptsetup FAQ for an example. When device mapping is active, you can see the loop backing file in the status command output. Also see losetup(8). LUKS2 header locking The LUKS2 on-disk metadata is updated in several steps, and to achieve a proper atomic update, there is a locking mechanism. For an image in a file, the code uses the flock(2) system call. For a block device, the lock is performed over a special file stored in a locking directory (by default /run/cryptsetup). The locking directory should be created with the proper security context by the distribution during the boot-up phase. Only LUKS2 uses locks; other formats do not use this mechanism. LUKS on-disk format specification For the LUKS on-disk metadata specification see LUKS1 <https://gitlab.com/cryptsetup/cryptsetup/wikis/Specification> and LUKS2 <https://gitlab.com/cryptsetup/LUKS2-docs>. AUTHORS top Cryptsetup was originally written by Jana Saout <jana@saout.de>. The LUKS extensions and original man page were written by Clemens Fruhwirth <clemens@endorphin.org>. 
Man page extensions by Milan Broz <gmazyland@gmail.com>. Man page rewrite and extension by Arno Wagner <arno@wagner.name>. REPORTING BUGS top Report bugs at the cryptsetup mailing list <cryptsetup@lists.linux.dev> or in the Issues project section <https://gitlab.com/cryptsetup/cryptsetup/-/issues/new>. Please attach the output of the failed command with the --debug option added. SEE ALSO top Cryptsetup FAQ <https://gitlab.com/cryptsetup/cryptsetup/wikis/FrequentlyAskedQuestions> cryptsetup(8), integritysetup(8) and veritysetup(8) CRYPTSETUP top Part of the cryptsetup project <https://gitlab.com/cryptsetup/cryptsetup/>. This page is part of the Cryptsetup (open-source disk encryption) project. Information about the project can be found at https://gitlab.com/cryptsetup/cryptsetup. If you have a bug report for this manual page, send it to dm-crypt@saout.de. This page was obtained from the project's upstream Git repository https://gitlab.com/cryptsetup/cryptsetup.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-20.) 
If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org cryptsetup 2.6.1-git 2022-12-14 CRYPTSETUP(8) Pages that refer to this page: homectl(1), systemd-cryptenroll(1), crypttab(5), cryptsetup(8), cryptsetup-benchmark(8), cryptsetup-bitlkDump(8), cryptsetup-close(8), cryptsetup-config(8), cryptsetup-convert(8), cryptsetup-erase(8), cryptsetup-fvault2Dump(8), cryptsetup-isLuks(8), cryptsetup-luksAddKey(8), cryptsetup-luksChangeKey(8), cryptsetup-luksConvertKey(8), cryptsetup-luksDump(8), cryptsetup-luksFormat(8), cryptsetup-luksHeaderBackup(8), cryptsetup-luksHeaderRestore(8), cryptsetup-luksKillSlot(8), cryptsetup-luksRemoveKey(8), cryptsetup-luksResume(8), cryptsetup-luksSuspend(8), cryptsetup-luksUUID(8), cryptsetup-open(8), cryptsetup-reencrypt(8), cryptsetup-refresh(8), cryptsetup-repair(8), cryptsetup-resize(8), cryptsetup-ssh(8), cryptsetup-status(8), cryptsetup-tcryptDump(8), cryptsetup-token(8), fsadm(8), integritysetup(8), losetup(8), systemd-cryptsetup(8), systemd-cryptsetup-generator(8), systemd-gpt-auto-generator(8), systemd-makefs@.service(8), veritysetup(8) 
| # cryptsetup\n\n> Manage plain dm-crypt and LUKS (Linux Unified Key Setup) encrypted volumes.\n> More information: <https://gitlab.com/cryptsetup/cryptsetup/>.\n\n- Initialize a LUKS volume (overwrites all data on the partition):\n\n`cryptsetup luksFormat {{/dev/sda1}}`\n\n- Open a LUKS volume and create a decrypted mapping at `/dev/mapper/target`:\n\n`cryptsetup luksOpen {{/dev/sda1}} {{target}}`\n\n- Remove an existing mapping:\n\n`cryptsetup luksClose {{target}}`\n\n- Change the LUKS volume's passphrase:\n\n`cryptsetup luksChangeKey {{/dev/sda1}}`\n |
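To make the loopback note in the cryptsetup page above concrete, here is a minimal sketch of a file-backed container. Assumptions (not from the source): root privileges, a throwaway backing file at the hypothetical path /tmp/vault.img, and an interactive passphrase prompt. When the device argument is a regular file, cryptsetup allocates the loop device automatically, as the man page's loopback notes describe.

```
# Illustrative sketch only -- requires root and destroys /tmp/vault.img.
truncate -s 64M /tmp/vault.img                     # create the backing file
cryptsetup luksFormat --type luks2 /tmp/vault.img  # prompts for a new passphrase
cryptsetup open /tmp/vault.img vault               # loop device set up automatically
mkfs.ext4 /dev/mapper/vault                        # put a filesystem inside
cryptsetup close vault                             # tear down mapping; loop autoclears
```

The autoclear flag mentioned in the notes means the loop device is released when the mapping is closed, so no manual losetup -d is needed.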
csplit | csplit(1) - Linux manual page NAME | SYNOPSIS | DESCRIPTION | AUTHOR | REPORTING BUGS | COPYRIGHT | SEE ALSO | COLOPHON CSPLIT(1) User Commands CSPLIT(1) NAME top csplit - split a file into sections determined by context lines SYNOPSIS top csplit [OPTION]... FILE PATTERN... DESCRIPTION top Output pieces of FILE separated by PATTERN(s) to files 'xx00', 'xx01', ..., and output byte counts of each piece to standard output. Read standard input if FILE is '-'. Mandatory arguments to long options are mandatory for short options too. -b, --suffix-format=FORMAT use sprintf FORMAT instead of %02d -f, --prefix=PREFIX use PREFIX instead of 'xx' -k, --keep-files do not remove output files on errors --suppress-matched suppress the lines matching PATTERN -n, --digits=DIGITS use specified number of digits instead of 2 -s, --quiet, --silent do not print counts of output file sizes -z, --elide-empty-files suppress empty output files --help display this help and exit --version output version information and exit Each PATTERN may be: INTEGER copy up to but not including specified line number /REGEXP/[OFFSET] copy up to but not including a matching line %REGEXP%[OFFSET] skip to, but not including a matching line {INTEGER} repeat the previous pattern specified number of times {*} repeat the previous pattern as many times as possible A line OFFSET is an integer optionally preceded by '+' or '-'. AUTHOR top Written by Stuart Kemp and David MacKenzie. REPORTING BUGS top GNU coreutils online help: <https://www.gnu.org/software/coreutils/> Report any translation bugs to <https://translationproject.org/team/> COPYRIGHT top Copyright © 2023 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. 
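The PATTERN forms above can be exercised with a small self-contained session (the file and prefix names here are made up for the demo): split at every line matching a regular expression, repeating the pattern with '{*}':

```shell
# Demo: split a sample file at every line matching /CHAPTER/.
mkdir -p /tmp/csplit_demo && cd /tmp/csplit_demo
printf 'CHAPTER 1\nalpha\nbeta\nCHAPTER 2\ngamma\ndelta\n' > book.txt
# -s silences the byte counts; -f sets the output prefix instead of 'xx';
# '{*}' repeats the /CHAPTER/ pattern as many times as possible.
csplit -s -f part_ book.txt '/CHAPTER/' '{*}'
ls part_*
# part_00 is empty (nothing precedes the first match, and -z was not given);
# part_01 and part_02 each hold one chapter of three lines.
```

Adding -z would suppress the empty leading piece, and -b '%03d.txt' would change the suffix format.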
SEE ALSO top Full documentation <https://www.gnu.org/software/coreutils/csplit> or available locally via: info '(coreutils) csplit invocation' COLOPHON top This page is part of the coreutils (basic file, shell and text manipulation utilities) project. Information about the project can be found at http://www.gnu.org/software/coreutils/. If you have a bug report for this manual page, see http://www.gnu.org/software/coreutils/. This page was obtained from the tarball coreutils-9.4.tar.xz fetched from http://ftp.gnu.org/gnu/coreutils/ on 2023-12-22. If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org GNU coreutils 9.4 August 2023 CSPLIT(1) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH. | # csplit\n\n> Split a file into pieces.\n> This generates files named "xx00", "xx01", and so on.\n> More information: <https://www.gnu.org/software/coreutils/csplit>.\n\n- Split a file at lines 5 and 23:\n\n`csplit {{path/to/file}} 5 23`\n\n- Split a file every 5 lines (this will fail if the total number of lines is not divisible by 5):\n\n`csplit {{path/to/file}} 5 {*}`\n\n- Split a file every 5 lines, ignoring exact-division error:\n\n`csplit -k {{path/to/file}} 5 {*}`\n\n- Split a file at line 5 and use a custom prefix for the output files:\n\n`csplit {{path/to/file}} 5 -f {{prefix}}`\n\n- Split a file at a line matching a regular expression:\n\n`csplit {{path/to/file}} /{{regular_expression}}/`\n |
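The INTEGER and /REGEXP/ pattern rules above can be exercised end-to-end with coreutils alone. A minimal sketch (the sample file, the temporary directory, and the chosen split points are just for illustration):

```shell
#!/bin/sh
# Demonstrate csplit's INTEGER and /REGEXP/ patterns on a throwaway file.
set -e
dir=$(mktemp -d)
cd "$dir"

seq 10 > sample.txt            # a 10-line input file containing 1..10

# INTEGER pattern: copy up to, but not including, line 5.
# -s suppresses the byte counts; pieces are named xx00, xx01, ...
csplit -s sample.txt 5
wc -l < xx00                   # 4  (lines 1-4)
wc -l < xx01                   # 6  (lines 5-10)

# /REGEXP/ pattern with -f: split before the first line matching ^7$,
# naming the pieces part_00, part_01, ...
csplit -s -f part_ sample.txt '/^7$/'
wc -l < part_00                # 6  (lines 1-6)
wc -l < part_01                # 4  (lines 7-10)
```

The remainder after the last pattern always lands in the final piece, which is why xx01 holds six lines rather than five.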
ctags | ctags(1p) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training ctags(1p) Linux manual page PROLOG | NAME | SYNOPSIS | DESCRIPTION | OPTIONS | OPERANDS | STDIN | INPUT FILES | ENVIRONMENT VARIABLES | ASYNCHRONOUS EVENTS | STDOUT | STDERR | OUTPUT FILES | EXTENDED DESCRIPTION | EXIT STATUS | CONSEQUENCES OF ERRORS | APPLICATION USAGE | EXAMPLES | RATIONALE | FUTURE DIRECTIONS | SEE ALSO | COPYRIGHT CTAGS(1P) POSIX Programmer's Manual CTAGS(1P) PROLOG top This manual page is part of the POSIX Programmer's Manual. The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux. NAME top ctags - create a tags file (DEVELOPMENT, FORTRAN) SYNOPSIS top ctags [-a] [-f tagsfile] pathname... ctags -x pathname... DESCRIPTION top The ctags utility shall be provided on systems that support the Software Development Utilities option, and either or both of the C-Language Development Utilities option and FORTRAN Development Utilities option. On other systems, it is optional. The ctags utility shall write a tagsfile or an index of objects from C-language or FORTRAN source files specified by the pathname operands. The tagsfile shall list the locators of language-specific objects within the source files. A locator consists of a name, pathname, and either a search pattern or a line number that can be used in searching for the object definition. The objects that shall be recognized are specified in the EXTENDED DESCRIPTION section. OPTIONS top The ctags utility shall conform to the Base Definitions volume of POSIX.1-2017, Section 12.2, Utility Syntax Guidelines. The following options shall be supported: -a Append to tagsfile. -f tagsfile Write the object locator lists into tagsfile instead of the default file named tags in the current directory. 
-x Produce a list of object names, the line number, and filename in which each is defined, as well as the text of that line, and write this to the standard output. A tagsfile shall not be created when -x is specified. OPERANDS top The following pathname operands are supported: file.c Files with basenames ending with the .c suffix shall be treated as C-language source code. Such files that are not valid input to c99 produce unspecified results. file.h Files with basenames ending with the .h suffix shall be treated as C-language source code. Such files that are not valid input to c99 produce unspecified results. file.f Files with basenames ending with the .f suffix shall be treated as FORTRAN-language source code. Such files that are not valid input to fort77 produce unspecified results. The handling of other files is implementation-defined. STDIN top See the INPUT FILES section. INPUT FILES top The input files shall be text files containing source code in the language indicated by the operand filename suffixes. ENVIRONMENT VARIABLES top The following environment variables shall affect the execution of ctags: LANG Provide a default value for the internationalization variables that are unset or null. (See the Base Definitions volume of POSIX.1-2017, Section 8.2, Internationalization Variables for the precedence of internationalization variables used to determine the values of locale categories.) LC_ALL If set to a non-empty string value, override the values of all the other internationalization variables. LC_COLLATE Determine the order in which output is sorted for the -x option. The POSIX locale determines the order in which the tagsfile is written. LC_CTYPE Determine the locale for the interpretation of sequences of bytes of text data as characters (for example, single-byte as opposed to multi-byte characters in arguments and input files). 
When processing C-language source code, if the locale is not compatible with the C locale described by the ISO C standard, the results are unspecified. LC_MESSAGES Determine the locale that should be used to affect the format and contents of diagnostic messages written to standard error. NLSPATH Determine the location of message catalogs for the processing of LC_MESSAGES. ASYNCHRONOUS EVENTS top Default. STDOUT top The list of object name information produced by the -x option shall be written to standard output in the following format: "%s %d %s %s", <object-name>, <line-number>, <filename>, <text> where <text> is the text of line <line-number> of file <filename>. STDERR top The standard error shall be used only for diagnostic messages. OUTPUT FILES top When the -x option is not specified, the format of the output file shall be: "%s\t%s\t/%s/\n", <identifier>, <filename>, <pattern> where <pattern> is a search pattern that could be used by an editor to find the defining instance of <identifier> in <filename> (where defining instance is indicated by the declarations listed in the EXTENDED DESCRIPTION). An optional <circumflex> ('^') can be added as a prefix to <pattern>, and an optional <dollar-sign> can be appended to <pattern> to indicate that the pattern is anchored to the beginning (end) of a line of text. Any <slash> or <backslash> characters in <pattern> shall be preceded by a <backslash> character. The anchoring <circumflex>, <dollar-sign>, and escaping <backslash> characters shall not be considered part of the search pattern. All other characters in the search pattern shall be considered literal characters. An alternative format is: "%s\t%s\t?%s?\n", <identifier>, <filename>, <pattern> which is identical to the first format except that <slash> characters in <pattern> shall not be preceded by escaping <backslash> characters, and <question-mark> characters in <pattern> shall be preceded by <backslash> characters. 
A second alternative format is: "%s\t%s\t%d\n", <identifier>, <filename>, <lineno> where <lineno> is a decimal line number that could be used by an editor to find <identifier> in <filename>. Neither alternative format shall be produced by ctags when it is used as described by POSIX.1-2008, but the standard utilities that process tags files shall be able to process those formats as well as the first format. In any of these formats, the file shall be sorted by identifier, based on the collation sequence in the POSIX locale. EXTENDED DESCRIPTION top If the operand identifies C-language source, the ctags utility shall attempt to produce an output line for each of the following objects: * Function definitions * Type definitions * Macros with arguments It may also produce output for any of the following objects: * Function prototypes * Structures * Unions * Global variable definitions * Enumeration types * Macros without arguments * #define statements * #line statements Any #if and #ifdef statements shall produce no output. The tag main is treated specially in C programs. The tag formed shall be created by prefixing M to the name of the file, with the trailing .c, and leading pathname components (if any) removed. On systems that do not support the C-Language Development Utilities option, ctags produces unspecified results for C-language source code files. It should write to standard error a message identifying this condition and cause a non-zero exit status to be produced. If the operand identifies FORTRAN source, the ctags utility shall produce an output line for each function definition. It may also produce output for any of the following objects: * Subroutine definitions * COMMON statements * PARAMETER statements * DATA and BLOCK DATA statements * Statement numbers On systems that do not support the FORTRAN Development Utilities option, ctags produces unspecified results for FORTRAN source code files. 
It should write to standard error a message identifying this condition and cause a non-zero exit status to be produced. It is implementation-defined what other objects (including duplicate identifiers) produce output. EXIT STATUS top The following exit values shall be returned: 0 Successful completion. >0 An error occurred. CONSEQUENCES OF ERRORS top Default. The following sections are informative. APPLICATION USAGE top The output with -x is meant to be a simple index that can be written out as an off-line readable function index. If the input files to ctags (such as .c files) were not created using the same locale as that in effect when ctags -x is run, results might not be as expected. The description of C-language processing says ``attempts to'' because the C language can be greatly confused, especially through the use of #defines, and this utility would be of no use if the real C preprocessor were run to identify them. The output from ctags may be fooled and incorrect for various constructs. EXAMPLES top None. RATIONALE top The option list was significantly reduced from that provided by historical implementations. The -F option was omitted as redundant, since it is the default. The -B option was omitted as being of very limited usefulness. The -t option was omitted since the recognition of typedefs is now required for C source files. The -u option was omitted because the update function was judged to be not only inefficient, but also rarely needed. An early proposal included a -w option to suppress warning diagnostics. Since the types of such diagnostics could not be described, the option was omitted as being not useful. The text for LC_CTYPE about compatibility with the C locale acknowledges that the ISO C standard imposes requirements on the locale used to process C source. This could easily be a superset of that known as ``the C locale'' by way of implementation extensions, or one of a few alternative locales for systems supporting different codesets. 
No statement is made for FORTRAN because the ANSI X3.9-1978 standard (FORTRAN 77) does not (yet) define a similar locale concept. However, a general rule in this volume of POSIX.1-2017 is that any time that locales do not match (preparing a file for one locale and processing it in another), the results are suspect. The collation sequence of the tags file is not affected by LC_COLLATE because it is typically not used by human readers, but only by programs such as vi to locate the tag within the source files. Using the POSIX locale eliminates some of the problems of coordinating locales between the ctags file creator and the vi file reader. Historically, the tags file has been used only by ex and vi. However, the format of the tags file has been published to encourage other programs to use the tags in new ways. The format allows either patterns or line numbers to find the identifiers because the historical vi recognizes either. The ctags utility does not produce the format using line numbers because it is not useful following any source file changes that add or delete lines. The documented search patterns match historical practice. It should be noted that literal leading <circumflex> or trailing <dollar-sign> characters in the search pattern will only behave correctly if anchored to the beginning of the line or end of the line by an additional <circumflex> or <dollar-sign> character. Historical implementations also understand the objects used by the languages Pascal and sometimes LISP, and they understand the C source output by lex and yacc. The ctags utility is not required to accommodate these languages, although implementors are encouraged to do so. The following historical option was not specified, as vgrind is not included in this volume of POSIX.1-2017: -v If the -v flag is given, an index of the form expected by vgrind is produced on the standard output. This listing contains the function name, filename, and page number (assuming 64-line pages). 
Since the output is sorted into lexicographic order, it may be desired to run the output through sort -f. Sample use: ctags -v files | sort -f > index vgrind -x index The special treatment of the tag main makes the use of ctags practical in directories with more than one program. FUTURE DIRECTIONS top None. SEE ALSO top c99(1p), fort77(1p), vi(1p) The Base Definitions volume of POSIX.12017, Chapter 8, Environment Variables, Section 12.2, Utility Syntax Guidelines COPYRIGHT top Portions of this text are reprinted and reproduced in electronic form from IEEE Std 1003.1-2017, Standard for Information Technology -- Portable Operating System Interface (POSIX), The Open Group Base Specifications Issue 7, 2018 Edition, Copyright (C) 2018 by the Institute of Electrical and Electronics Engineers, Inc and The Open Group. In the event of any discrepancy between this version and the original IEEE and The Open Group Standard, the original IEEE and The Open Group Standard is the referee document. The original Standard can be obtained online at http://www.opengroup.org/unix/online.html . Any typographical or formatting errors that appear in this page are most likely to have been introduced during the conversion of the source files to man page format. To report such errors, see https://www.kernel.org/doc/man-pages/reporting_bugs.html . IEEE/The Open Group 2017 CTAGS(1P) Pages that refer to this page: ex(1p), more(1p) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH. 
| # ctags\n\n> Generates an index (or tag) file of language objects found in source files for many popular programming languages.\n> More information: <https://ctags.io/>.\n\n- Generate tags for a single file, and output them to a file named "tags" in the current directory, overwriting the file if it exists:\n\n`ctags {{path/to/file}}`\n\n- Generate tags for all files in the current directory, and output them to a specific file, overwriting the file if it exists:\n\n`ctags -f {{path/to/file}} *`\n\n- Generate tags for all files in the current directory and all subdirectories:\n\n`ctags --recurse`\n\n- Generate tags for a single file, and output them with start line number and end line number in JSON format:\n\n`ctags --fields=+ne --output-format=json {{path/to/file}}`\n |
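The first OUTPUT FILES format above ("%s\t%s\t/%s/\n") is plain tab-separated text, so a tags file is easy to post-process. A small sketch that writes one sample entry by hand (rather than running ctags, so the identifier and filename are made up) and splits it with awk:

```shell
#!/bin/sh
# Illustrate the POSIX tags-file format "%s\t%s\t/%s/\n" with a hand-written
# entry: identifier, filename, and an anchored search pattern, tab-separated.
set -e
cd "$(mktemp -d)"
printf 'main\thello.c\t/^int main(void)$/\n' > tags

# Pull the three fields apart on the tab separator.
awk -F'\t' '{ print "identifier:", $1; print "file:", $2; print "pattern:", $3 }' tags
```

The ^ and $ anchors in the pattern field are the optional anchoring characters described above; an editor such as vi uses the pattern (not a line number) to locate the defining instance.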
ctrlaltdel | ctrlaltdel(8) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training ctrlaltdel(8) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | FILES | AUTHORS | SEE ALSO | REPORTING BUGS | AVAILABILITY CTRLALTDEL(8) System Administration CTRLALTDEL(8) NAME top ctrlaltdel - set the function of the Ctrl-Alt-Del combination SYNOPSIS top ctrlaltdel hard|soft DESCRIPTION top Based on examination of the linux/kernel/reboot.c code, it is clear that there are two supported functions that the <Ctrl-Alt-Del> sequence can perform. hard Immediately reboot the computer without calling sync(2) and without any other preparation. This is the default. soft Make the kernel send the SIGINT (interrupt) signal to the init process (this is always the process with PID 1). If this option is used, the init(8) program must support this feature. Since there are now several init(8) programs in the Linux community, please consult the documentation for the version that you are currently using. When the command is run without any argument, it will display the current setting. The function of ctrlaltdel is usually set in the /etc/rc.local file. OPTIONS top -h, --help Display help text and exit. -V, --version Print version and exit. FILES top /etc/rc.local AUTHORS top Peter Orbaek <poe@daimi.aau.dk> SEE ALSO top init(8), systemd(1) REPORTING BUGS top For bug reports, use the issue tracker at https://github.com/util-linux/util-linux/issues. AVAILABILITY top The ctrlaltdel command is part of the util-linux package which can be downloaded from Linux Kernel Archive <https://www.kernel.org/pub/linux/utils/util-linux/>. This page is part of the util-linux (a random collection of Linux utilities) project. Information about the project can be found at https://www.kernel.org/pub/linux/utils/util-linux/. If you have a bug report for this manual page, send it to util-linux@vger.kernel.org. 
This page was obtained from the project's upstream Git repository git://git.kernel.org/pub/scm/utils/util-linux/util-linux.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-14.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org util-linux 2.39.594-1e0ad 2023-07-19 CTRLALTDEL(8) Pages that refer to this page: reboot(2) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH. | # ctrlaltdel\n\n> Utility to control what happens when CTRL+ALT+DEL is pressed.\n> More information: <https://manned.org/ctrlaltdel>.\n\n- Get current setting:\n\n`ctrlaltdel`\n\n- Set CTRL+ALT+DEL to reboot immediately, without any preparation:\n\n`sudo ctrlaltdel hard`\n\n- Set CTRL+ALT+DEL to reboot "normally", giving processes a chance to exit first (send SIGINT to PID1):\n\n`sudo ctrlaltdel soft`\n |
cups | cups(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training cups(1) Linux manual page NAME | DESCRIPTION | ENVIRONMENT | FILES | CONFORMING TO | NOTES | SEE ALSO | COPYRIGHT | COLOPHON cups(1) Apple Inc. cups(1) NAME top cups - a standards-based, open source printing system DESCRIPTION top CUPS is the software you use to print from applications like word processors, email readers, photo editors, and web browsers. It converts the page descriptions produced by your application (put a paragraph here, draw a line there, and so forth) into something your printer can understand and then sends the information to the printer for printing. Now, since every printer manufacturer does things differently, printing can be very complicated. CUPS does its best to hide this from you and your application so that you can concentrate on printing and less on how to print. Generally, the only time you need to know anything about your printer is when you use it for the first time, and even then CUPS can often figure things out on its own. HOW DOES IT WORK? The first time you print to a printer, CUPS creates a queue to keep track of the current status of the printer (everything OK, out of paper, etc.) and any pages you have printed. Most of the time the queue points to a printer connected directly to your computer via a USB port; however, it can also point to a printer on your network, a printer on the Internet, or multiple printers depending on the configuration. Regardless of where the queue points, it will look like any other printer to you and your applications. Every time you print something, CUPS creates a job which contains the queue you are sending the print to, the name of the document you are printing, and the page descriptions. Jobs are numbered (queue-1, queue-2, and so forth) so you can monitor the job as it is printed or cancel it if you see a mistake. 
When CUPS gets a job for printing, it determines the best programs (filters, printer drivers, port monitors, and backends) to convert the pages into a printable format and then runs them to actually print the job. When the print job is completely printed, CUPS removes the job from the queue and moves on to any other jobs you have submitted. You can also be notified when the job is finished, or if there are any errors during printing, in several different ways. WHERE DO I BEGIN? The easiest way to start is by using the web interface to configure your printer. Go to "http://localhost:631" and choose the Administration tab at the top of the page. Click/press on the Add Printer button and follow the prompts. When you are asked for a username and password, enter your login username and password or the "root" username and password. After the printer is added you will be asked to set the default printer options (paper size, output mode, etc.) for the printer. Make any changes as needed and then click/press on the Set Default Options button to save them. Some printers also support auto-configuration - click/press on the Query Printer for Default Options button to update the options automatically. Once you have added the printer, you can print to it from any application. You can also choose Print Test Page from the maintenance menu to print a simple test page and verify that everything is working properly. You can also use the lpadmin(8) and lpinfo(8) commands to add printers to CUPS. Additionally, your operating system may include graphical user interfaces or automatically create printer queues when you connect a printer to your computer. HOW DO I GET HELP? The CUPS web site (http://www.CUPS.org) provides access to the cups and cups-devel mailing lists, additional documentation and resources, and a bug report database. Most vendors also provide online discussion forums to ask printing questions for your operating system of choice. 
ENVIRONMENT top CUPS commands use the following environment variables to override the default locations of files and so forth. For security reasons, these environment variables are ignored for setuid programs: CUPS_ANYROOT Whether to allow any X.509 certificate root (Y or N). CUPS_CACHEDIR The directory where semi-persistent cache files can be found. CUPS_DATADIR The directory where data files can be found. CUPS_ENCRYPTION The default level of encryption (Always, IfRequested, Never, Required). CUPS_EXPIREDCERTS Whether to allow expired X.509 certificates (Y or N). CUPS_GSSSERVICENAME The Kerberos service name used for authentication. CUPS_SERVER The hostname/IP address and port number of the CUPS scheduler (hostname:port or ipaddress:port). CUPS_SERVERBIN The directory where server helper programs, filters, backend, etc. can be found. CUPS_SERVERROOT The root directory of the server. CUPS_STATEDIR The directory where state files can be found. CUPS_USER Specifies the name of the user for print requests. HOME Specifies the home directory of the current user. IPP_PORT Specifies the default port number for IPP requests. LOCALEDIR Specifies the location of localization files. LPDEST Specifies the default print queue (System V standard). PRINTER Specifies the default print queue (Berkeley standard). TMPDIR Specifies the location of temporary files. FILES top ~/.cups/client.conf ~/.cups/lpoptions CONFORMING TO top CUPS conforms to the Internet Printing Protocol version 2.1 and implements the Berkeley and System V UNIX print commands. NOTES top CUPS printer drivers, backends, and PPD files are deprecated and will no longer be supported in a future feature release of CUPS. Printers that do not support IPP can be supported using applications such as ippeveprinter(1). 
SEE ALSO top cancel(1), client.conf(7), cupsctl(8), cupsd(8), lp(1), lpadmin(8), lpinfo(8), lpoptions(1), lpr(1), lprm(1), lpq(1), lpstat(1), CUPS Online Help (http://localhost:631/help), CUPS Web Site (http://www.CUPS.org), PWG Internet Printing Protocol Workgroup (http://www.pwg.org/ipp) COPYRIGHT top Copyright 2007-2019 by Apple Inc. COLOPHON top This page is part of the CUPS (a standards-based, open source printing system) project. Information about the project can be found at http://www.cups.org/. If you have a bug report for this manual page, see http://www.cups.org/. This page was obtained from the project's upstream Git repository https://github.com/apple/cups on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-10-27.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org 26 April 2019 CUPS cups(1) Pages that refer to this page: cups-config(1), ippeveprinter(1), client.conf(5), cups-files.conf(5), backend(7), filter(7), cupsd(8), cupsd-helper(8), cupsfilter(8), cups-lpd(8) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH. 
| # CUPS\n\n> Open source printing system.\n> CUPS isn't a single command, but a set of commands.\n> More information: <https://www.cups.org/index.html>.\n\n- View documentation for running the CUPS daemon:\n\n`tldr cupsd`\n\n- View documentation for managing printers:\n\n`tldr lpadmin`\n\n- View documentation for printing files:\n\n`tldr lp`\n\n- View documentation for checking status information about the current classes, jobs, and printers:\n\n`tldr lpstat`\n\n- View documentation for cancelling print jobs:\n\n`tldr lprm`\n |
cups-config | cups-config(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training cups-config(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | EXAMPLES | DEPRECATED OPTIONS | SEE ALSO | COPYRIGHT | COLOPHON cups-config(1) Apple Inc. cups-config(1) NAME top cups-config - get cups api, compiler, directory, and link information. SYNOPSIS top cups-config --api-version cups-config --build cups-config --cflags cups-config --datadir cups-config --help cups-config --ldflags cups-config [ --image ] [ --static ] --libs cups-config --serverbin cups-config --serverroot cups-config --version DESCRIPTION top The cups-config command allows application developers to determine the necessary command-line options for the compiler and linker, as well as the installation directories for filters, configuration files, and drivers. All values are reported to the standard output. OPTIONS top The cups-config command accepts the following command-line options: --api-version Reports the current API version (major.minor). --build Reports a system-specific build number. --cflags Reports the necessary compiler options. --datadir Reports the default CUPS data directory. --help Reports the program usage message. --ldflags Reports the necessary linker options. --libs Reports the necessary libraries to link to. --serverbin Reports the default CUPS binary directory, where filters and backends are stored. --serverroot Reports the default CUPS configuration file directory. --static When used with --libs, reports the static libraries instead of the default (shared) libraries. --version Reports the full version number of the CUPS installation (major.minor.patch). 
EXAMPLES top Show the currently installed version of CUPS: cups-config --version Compile a simple one-file CUPS filter: cc `cups-config --cflags --ldflags` -o filter filter.c \ `cups-config --libs` DEPRECATED OPTIONS top The following options are deprecated but continue to work for backwards compatibility: --image Formerly used to add the CUPS imaging library to the list of libraries. SEE ALSO top cups(1), CUPS Online Help (http://localhost:631/help) COPYRIGHT top Copyright 2007-2019 by Apple Inc. COLOPHON top This page is part of the CUPS (a standards-based, open source printing system) project. Information about the project can be found at http://www.cups.org/. If you have a bug report for this manual page, see http://www.cups.org/. This page was obtained from the project's upstream Git repository https://github.com/apple/cups on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-10-27.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org 26 April 2019 CUPS cups-config(1) Pages that refer to this page: ippeveprinter(1) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH. 
| # cups-config\n\n> Show technical information about your CUPS print server installation.\n> More information: <https://openprinting.github.io/cups/doc/man-cups-config.html>.\n\n- Show where CUPS is currently installed:\n\n`cups-config --serverbin`\n\n- Show the location of CUPS' configuration directory:\n\n`cups-config --serverroot`\n\n- Show the location of CUPS' data directory:\n\n`cups-config --datadir`\n\n- Display help:\n\n`cups-config --help`\n\n- Display CUPS version:\n\n`cups-config --version`\n |
cupsaccept | cupsaccept(8) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training cupsaccept(8) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | CONFORMING TO | SEE ALSO | COPYRIGHT | COLOPHON cupsaccept(8) Apple Inc. cupsaccept(8) NAME top cupsaccept/cupsreject - accept/reject jobs sent to a destination SYNOPSIS top cupsaccept [ -E ] [ -U username ] [ -h hostname[:port] ] destination(s) cupsreject [ -E ] [ -U username ] [ -h hostname[:port] ] [ -r reason ] destination(s) DESCRIPTION top The cupsaccept command instructs the printing system to accept print jobs to the specified destinations. The cupsreject command instructs the printing system to reject print jobs to the specified destinations. The -r option sets the reason for rejecting print jobs. If not specified, the reason defaults to "Reason Unknown". OPTIONS top The following options are supported by both cupsaccept and cupsreject: -E Forces encryption when connecting to the server. -U username Sets the username that is sent when connecting to the server. -h hostname[:port] Chooses an alternate server. -r "reason" Sets the reason string that is shown for a printer that is rejecting jobs. CONFORMING TO top The cupsaccept and cupsreject commands correspond to the System V printing system commands "accept" and "reject", respectively. Unlike the System V printing system, CUPS allows printer names to contain any printable character except SPACE, TAB, "/", or "#". Also, printer and class names are not case-sensitive. Finally, the CUPS versions may ask the user for an access password depending on the printing system configuration. SEE ALSO top cancel(1), cupsenable(8), lp(1), lpadmin(8), lpstat(1), CUPS Online Help (http://localhost:631/help) COPYRIGHT top Copyright 2007-2019 by Apple Inc. COLOPHON top This page is part of the CUPS (a standards-based, open source printing system) project. Information about the project can be found at http://www.cups.org/. 
If you have a bug report for this manual page, see http://www.cups.org/. This page was obtained from the project's upstream Git repository https://github.com/apple/cups on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-10-27.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org 26 April 2019 CUPS cupsaccept(8) Pages that refer to this page: cupsenable(8), lpadmin(8), lpc(8) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH. | # cupsaccept\n\n> Accept jobs sent to destinations.\n> Note: a destination refers to a printer or a class of printers.\n> See also: `cupsreject`, `cupsenable`, `cupsdisable`, `lpstat`.\n> More information: <https://www.cups.org/doc/man-cupsaccept.html>.\n\n- Accept print jobs to the specified destinations:\n\n`cupsaccept {{destination1 destination2 ...}}`\n\n- Specify a different server:\n\n`cupsaccept -h {{server}} {{destination1 destination2 ...}}`\n |
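A typical maintenance window pairs `cupsreject -r` with `cupsaccept`. A dry-run sketch for a hypothetical printer `office_laser`: each command is echoed rather than executed so the sequence can be previewed without a running CUPS server (drop the `echo` on a live system).

```shell
# Dry run: show the reject/accept pair for a hypothetical destination.
# "office_laser" and the reason string are this example's own.
printer="office_laser"
echo cupsreject -r "printer maintenance" "$printer"
echo cupsaccept "$printer"
```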
cupsctl | cupsctl(8) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training cupsctl(8) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | EXAMPLES | KNOWN ISSUES | SEE ALSO | COPYRIGHT | COLOPHON cupsctl(8) Apple Inc. cupsctl(8) NAME top cupsctl - configure cupsd.conf options SYNOPSIS top cupsctl [ -E ] [ -U username ] [ -h server[:port] ] [ --[no-]debug-logging ] [ --[no-]remote-admin ] [ --[no-]remote-any ] [ --[no-]share-printers ] [ --[no-]user-cancel-any ] [ name=value ] DESCRIPTION top cupsctl updates or queries the cupsd.conf file for a server. When no changes are requested, the current configuration values are written to the standard output in the format "name=value", one per line. OPTIONS top The following options are recognized: -E Enables encryption on the connection to the scheduler. -U username Specifies an alternate username to use when authenticating with the scheduler. -h server[:port] Specifies the server address. --[no-]debug-logging Enables (disables) debug logging to the error_log file. --[no-]remote-admin Enables (disables) remote administration. --[no-]remote-any Enables (disables) printing from any address, e.g., the Internet. --[no-]share-printers Enables (disables) sharing of local printers with other computers. --[no-]user-cancel-any Allows (prevents) users to cancel jobs owned by others. EXAMPLES top Display the current settings: cupsctl Enable debug logging: cupsctl --debug-logging Get the current debug logging state: cupsctl | grep '^_debug_logging' | awk -F= '{print $2}' Disable printer sharing: cupsctl --no-share-printers KNOWN ISSUES top You cannot set the Listen or Port directives using cupsctl. SEE ALSO top cupsd.conf(5), cupsd(8), CUPS Online Help (http://localhost:631/help) COPYRIGHT top Copyright 2007-2019 by Apple Inc. COLOPHON top This page is part of the CUPS (a standards-based, open source printing system) project. Information about the project can be found at http://www.cups.org/. 
26 April 2019 CUPS cupsctl(8) Pages that refer to this page: cups(1) | # cupsctl\n\n> Update or query a server's `cupsd.conf`.\n> More information: <https://openprinting.github.io/cups/doc/man-cupsctl.html>.\n\n- Display the current configuration values:\n\n`cupsctl`\n\n- Display the configuration values of a specific server:\n\n`cupsctl -h {{server[:port]}}`\n\n- Enable encryption on the connection to the scheduler:\n\n`cupsctl -E`\n\n- Enable or disable debug logging to the `error_log` file:\n\n`cupsctl {{--debug-logging|--no-debug-logging}}`\n\n- Enable or disable remote administration:\n\n`cupsctl {{--remote-admin|--no-remote-admin}}`\n\n- Parse the current debug logging state:\n\n`cupsctl | grep '^_debug_logging' | awk -F= '{print $2}'`\n |
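Because `cupsctl` emits one `name=value` pair per line, standard text tools can pick out any setting. A sketch using a captured sample (this example's own) in place of live output, so it runs without a cupsd server; on a real system pipe `cupsctl` itself into `awk`.

```shell
# Parse one setting out of cupsctl-style "name=value" output.
# The sample stands in for a live `cupsctl` run.
sample='_debug_logging=0
_remote_admin=1
_share_printers=0'
printf '%s\n' "$sample" | awk -F= '$1 == "_remote_admin" { print $2 }'   # prints 1
```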
cupsd | cupsd(8) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training cupsd(8) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | FILES | CONFORMING TO | EXAMPLES | SEE ALSO | COPYRIGHT | COLOPHON cupsd(8) Apple Inc. cupsd(8) NAME top cupsd - cups scheduler SYNOPSIS top cupsd [ -c cupsd.conf ] [ -f ] [ -F ] [ -h ] [ -l ] [ -s cups-files.conf ] [ -t ] DESCRIPTION top cupsd is the scheduler for CUPS. It implements a printing system based upon the Internet Printing Protocol, version 2.1, and supports most of the requirements for IPP Everywhere. If no options are specified on the command-line then the default configuration file /etc/cups/cupsd.conf will be used. OPTIONS top -c cupsd.conf Uses the named cupsd.conf configuration file. -f Run cupsd in the foreground; the default is to run in the background as a "daemon". -F Run cupsd in the foreground but detach the process from the controlling terminal and current directory. This is useful for running cupsd from init(8). -h Shows the program usage. -l This option is passed to cupsd when it is run from launchd(8) or systemd(8). -s cups-files.conf Uses the named cups-files.conf configuration file. -t Test the configuration file for syntax errors. FILES top /etc/cups/classes.conf /etc/cups/cups-files.conf /etc/cups/cupsd.conf /usr/share/cups/mime/mime.convs /usr/share/cups/mime/mime.types /etc/cups/printers.conf /etc/cups/subscriptions.conf CONFORMING TO top cupsd implements all of the required IPP/2.1 attributes and operations. It also implements several CUPS-specific administrative operations.
EXAMPLES top Run cupsd in the background with the default configuration file: cupsd Test a configuration file called test.conf: cupsd -t -c test.conf Run cupsd in the foreground with a test configuration file called test.conf: cupsd -f -c test.conf SEE ALSO top backend(7), classes.conf(5), cups(1), cups-files.conf(5), cups-lpd(8), cupsd.conf(5), cupsd-helper(8), cupsd-logs(8), filter(7), launchd(8), mime.convs(5), mime.types(5), printers.conf(5), systemd(8), CUPS Online Help (http://localhost:631/help) COPYRIGHT top Copyright 2007-2019 by Apple Inc. COLOPHON top This page is part of the CUPS (a standards-based, open source printing system) project. Information about the project can be found at http://www.cups.org/. 26 April 2019 CUPS cupsd(8) Pages that refer to this page: cups(1), classes.conf(5), cupsd.conf(5), cupsd-logs(5), cups-files.conf(5), mailto.conf(5), mime.convs(5), mime.types(5), printers.conf(5), subscriptions.conf(5), backend(7), filter(7), notifier(7), cupsctl(8), cupsd-helper(8), cups-lpd(8), cups-snmp(8)
| # cupsd\n\n> Server daemon for the CUPS print server.\n> More information: <https://openprinting.github.io/cups/doc/man-cupsd.html>.\n\n- Start `cupsd` in the background, i.e. as a daemon:\n\n`cupsd`\n\n- Start `cupsd` in the [f]oreground:\n\n`cupsd -f`\n\n- [l]aunch `cupsd` on-demand (commonly used by `launchd` or `systemd`):\n\n`cupsd -l`\n\n- Start `cupsd` using the specified [`c`]`upsd.conf` configuration file:\n\n`cupsd -c {{path/to/cupsd.conf}}`\n\n- Start `cupsd` using the specified `cups-file`[`s`]`.conf` configuration file:\n\n`cupsd -s {{path/to/cups-files.conf}}`\n\n- [t]est the [`c`]`upsd.conf` configuration file for errors:\n\n`cupsd -t -c {{path/to/cupsd.conf}}`\n\n- [t]est the `cups-file`[`s`]`.conf` configuration file for errors:\n\n`cupsd -t -s {{path/to/cups-files.conf}}`\n\n- Display help:\n\n`cupsd -h`\n |
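A common pattern is to run `cupsd -t` before (re)starting the scheduler. A guarded sketch: the config path and the messages are this example's own, and when `cupsd` is not installed the script only reports what it would have run.

```shell
# Validate a cupsd configuration before restarting the scheduler.
# Hypothetical path; adjust for your system.
conf="/etc/cups/cupsd.conf"
if command -v cupsd >/dev/null 2>&1 && [ -r "$conf" ]; then
  if cupsd -t -c "$conf" >/dev/null 2>&1; then
    echo "syntax OK: $conf"          # safe to restart cupsd
  else
    echo "syntax errors in $conf"    # fix before restarting
  fi
else
  echo "would run: cupsd -t -c $conf"
fi
```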
cupsdisable | cupsenable(8) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training cupsenable(8) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | CONFORMING TO | SEE ALSO | COPYRIGHT | COLOPHON cupsenable(8) Apple Inc. cupsenable(8) NAME top cupsdisable, cupsenable - stop/start printers and classes SYNOPSIS top cupsdisable [ -E ] [ -U username ] [ -c ] [ -h server[:port] ] [ -r reason ] [ --hold ] destination(s) cupsenable [ -E ] [ -U username ] [ -c ] [ -h server[:port] ] [ --release ] destination(s) DESCRIPTION top cupsenable starts the named printers or classes while cupsdisable stops the named printers or classes. OPTIONS top The following options may be used: -E Forces encryption of the connection to the server. -U username Uses the specified username when connecting to the server. -c Cancels all jobs on the named destination. -h server[:port] Uses the specified server and port. --hold Holds remaining jobs on the named printer. Useful for allowing the current job to complete before performing maintenance. -r "reason" Sets the message associated with the stopped state. If no reason is specified then the message is set to "Reason Unknown". --release Releases pending jobs for printing. Use after running cupsdisable with the --hold option to resume printing. CONFORMING TO top Unlike the System V printing system, CUPS allows printer names to contain any printable character except SPACE, TAB, "/", or "#". Also, printer and class names are not case-sensitive. The System V versions of these commands are disable and enable, respectively. They have been renamed to avoid conflicts with the bash(1) build-in commands of the same names. The CUPS versions of disable and enable may ask the user for an access password depending on the printing system configuration. This differs from the System V versions which require the root user to execute these commands. 
SEE ALSO top cupsaccept(8), cupsreject(8), cancel(1), lp(1), lpadmin(8), lpstat(1), CUPS Online Help (http://localhost:631/help) COPYRIGHT top Copyright 2007-2019 by Apple Inc. COLOPHON top This page is part of the CUPS (a standards-based, open source printing system) project. Information about the project can be found at http://www.cups.org/. 26 April 2019 CUPS cupsenable(8) Pages that refer to this page: cupsaccept(8), lpadmin(8), lpc(8) | # cupsdisable\n\n> Stop printers and classes.\n> Note: a destination refers to a printer or a class of printers.\n> See also: `cupsenable`, `cupsaccept`, `cupsreject`, `lpstat`.\n> More information: <https://openprinting.github.io/cups/doc/man-cupsenable.html>.\n\n- Stop one or more destination(s):\n\n`cupsdisable {{destination1 destination2 ...}}`\n\n- Cancel all jobs of the specified destination(s):\n\n`cupsdisable -c {{destination1 destination2 ...}}`\n |
cupsenable | cupsenable(8) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training cupsenable(8) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | CONFORMING TO | SEE ALSO | COPYRIGHT | COLOPHON cupsenable(8) Apple Inc. cupsenable(8) NAME top cupsdisable, cupsenable - stop/start printers and classes SYNOPSIS top cupsdisable [ -E ] [ -U username ] [ -c ] [ -h server[:port] ] [ -r reason ] [ --hold ] destination(s) cupsenable [ -E ] [ -U username ] [ -c ] [ -h server[:port] ] [ --release ] destination(s) DESCRIPTION top cupsenable starts the named printers or classes while cupsdisable stops the named printers or classes. OPTIONS top The following options may be used: -E Forces encryption of the connection to the server. -U username Uses the specified username when connecting to the server. -c Cancels all jobs on the named destination. -h server[:port] Uses the specified server and port. --hold Holds remaining jobs on the named printer. Useful for allowing the current job to complete before performing maintenance. -r "reason" Sets the message associated with the stopped state. If no reason is specified then the message is set to "Reason Unknown". --release Releases pending jobs for printing. Use after running cupsdisable with the --hold option to resume printing. CONFORMING TO top Unlike the System V printing system, CUPS allows printer names to contain any printable character except SPACE, TAB, "/", or "#". Also, printer and class names are not case-sensitive. The System V versions of these commands are disable and enable, respectively. They have been renamed to avoid conflicts with the bash(1) build-in commands of the same names. The CUPS versions of disable and enable may ask the user for an access password depending on the printing system configuration. This differs from the System V versions which require the root user to execute these commands. 
SEE ALSO top cupsaccept(8), cupsreject(8), cancel(1), lp(1), lpadmin(8), lpstat(1), CUPS Online Help (http://localhost:631/help) COPYRIGHT top Copyright 2007-2019 by Apple Inc. COLOPHON top This page is part of the CUPS (a standards-based, open source printing system) project. Information about the project can be found at http://www.cups.org/. 26 April 2019 CUPS cupsenable(8) Pages that refer to this page: cupsaccept(8), lpadmin(8), lpc(8) | # cupsenable\n\n> Start printers and classes.\n> Note: a destination refers to a printer or a class of printers.\n> See also: `cupsdisable`, `cupsaccept`, `cupsreject`, `lpstat`.\n> More information: <https://www.cups.org/doc/man-cupsenable.html>.\n\n- Start one or more destination(s):\n\n`cupsenable {{destination1 destination2 ...}}`\n\n- Resume printing of pending jobs of a destination (use after `cupsdisable` with `--hold`):\n\n`cupsenable --release {{destination}}`\n\n- Cancel all jobs of the specified destination(s):\n\n`cupsenable -c {{destination1 destination2 ...}}`\n |
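The `--hold`/`--release` pair described above is the documented way to drain a printer for maintenance. A dry-run sketch for a hypothetical destination `lab_printer`: the commands are echoed so the sequence can be inspected without a CUPS server (drop the `echo` on a live system).

```shell
# Dry run of a maintenance sequence; "lab_printer" is this example's own.
dest="lab_printer"
echo cupsdisable --hold "$dest"    # finish the current job, hold the rest
echo cupsenable --release "$dest"  # resume the held jobs afterwards
```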
cupstestppd | cupstestppd(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training cupstestppd(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | EXIT STATUS | EXAMPLES | NOTES | SEE ALSO | COPYRIGHT | COLOPHON cupstestppd(1) Apple Inc. cupstestppd(1) NAME top cupstestppd - test conformance of ppd files SYNOPSIS top cupstestppd [ -I category ] [ -R rootdir ] [ -W category ] [ -q ] [ -r ] [ -v[v] ] filename.ppd[.gz] [ ... filename.ppd[.gz] ] cupstestppd [ -R rootdir ] [ -W category ] [ -q ] [ -r ] [ -v[v] ] - DESCRIPTION top cupstestppd tests the conformance of PPD files to the Adobe PostScript Printer Description file format specification version 4.3. It can also be used to list the supported options and available fonts in a PPD file. The results of testing and any other output are sent to the standard output. The first form of cupstestppd tests one or more PPD files on the command-line. The second form tests the PPD file provided on the standard input. OPTIONS top cupstestppd supports the following options: -I filename Ignores all PCFileName warnings. -I filters Ignores all filter errors. -I profiles Ignores all profile errors. -R rootdir Specifies an alternate root directory for the filter, pre- filter, and other support file checks. -W constraints Report all UIConstraint errors as warnings. -W defaults Except for size-related options, report all default option errors as warnings. -W filters Report all filter errors as warnings. -W profiles Report all profile errors as warnings. -W sizes Report all media size errors as warnings. -W translations Report all translation errors as warnings. -W all Report all of the previous errors as warnings. -W none Report all of the previous errors as errors. -q Specifies that no information should be displayed. -r Relaxes the PPD conformance requirements so that common whitespace, control character, and formatting problems are not treated as hard errors. 
-v Specifies that detailed conformance testing results should be displayed rather than the concise PASS/FAIL/ERROR status. -vv Specifies that all information in the PPD file should be displayed in addition to the detailed conformance testing results. The -q, -v, and -vv options are mutually exclusive. EXIT STATUS top cupstestppd returns zero on success and non-zero on error. The error codes are as follows: 1 Bad command-line arguments or missing PPD filename. 2 Unable to open or read PPD file. 3 The PPD file contains format errors that cannot be skipped. 4 The PPD file does not conform to the Adobe PPD specification. EXAMPLES top The following command will test all PPD files under the current directory and print the names of each file that does not conform: find . -name \*.ppd \! -exec cupstestppd -q '{}' \; -print The next command tests all PPD files under the current directory and print detailed conformance testing results for the files that do not conform: find . -name \*.ppd \! -exec cupstestppd -q '{}' \; \ -exec cupstestppd -v '{}' \; NOTES top PPD files are deprecated and will no longer be supported in a future feature release of CUPS. Printers that do not support IPP can be supported using applications such as ippeveprinter(1). SEE ALSO top lpadmin(8), CUPS Online Help (http://localhost:631/help), Adobe PostScript Printer Description File Format Specification, Version 4.3. COPYRIGHT top Copyright 2007-2019 by Apple Inc. COLOPHON top This page is part of the CUPS (a standards-based, open source printing system) project. Information about the project can be found at http://www.cups.org/. If you have a bug report for this manual page, see http://www.cups.org/. This page was obtained from the project's upstream Git repository https://github.com/apple/cups on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-10-27.) 
26 April 2019 CUPS cupstestppd(1) | # cupstestppd\n\n> Test conformance of PPD files to version 4.3 of the specification.\n> Error codes (1, 2, 3 and 4, respectively): bad CLI arguments, unable to open file, unskippable format errors and non-conformance with PPD specification.\n> Note: this command is deprecated.\n> See also: `lpadmin`.\n> More information: <https://openprinting.github.io/cups/doc/man-cupstestppd.html>.\n\n- Test the conformance of one or more files in quiet mode:\n\n`cupstestppd -q {{path/to/file1.ppd path/to/file2.ppd ...}}`\n\n- Get the PPD file from `stdin`, showing detailed conformance testing results:\n\n`cupstestppd -v - < {{path/to/file.ppd}}`\n\n- Test all PPD files under the current directory, printing the names of each file that does not conform:\n\n`find . -name \*.ppd \! -execdir cupstestppd -q '{}' \; -print`\n |
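The EXIT STATUS section maps cleanly onto a shell `case`. A sketch: `check_ppd_status` is this example's own wrapper, exercised here with a stubbed status code so it runs without `cupstestppd` installed.

```shell
# Map cupstestppd's documented exit codes to human-readable messages.
check_ppd_status() {
  case "$1" in
    0) echo "conforms" ;;
    1) echo "bad arguments or missing PPD filename" ;;
    2) echo "unable to open or read PPD file" ;;
    3) echo "format errors that cannot be skipped" ;;
    4) echo "does not conform to the Adobe PPD specification" ;;
    *) echo "unknown status $1" ;;
  esac
}

# Real use would be: cupstestppd -q file.ppd; check_ppd_status $?
check_ppd_status 4   # prints: does not conform to the Adobe PPD specification
```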
curl | curl(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training curl(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | URL | GLOBBING | VARIABLES | OUTPUT | PROTOCOLS | PROGRESS METER | VERSION | OPTIONS | FILES | ENVIRONMENT | PROXY PROTOCOL PREFIXES | EXIT CODES | BUGS | AUTHORS / CONTRIBUTORS | WWW | SEE ALSO | COLOPHON curl(1) curl Manual curl(1) NAME top curl - transfer a URL SYNOPSIS top curl [options / URLs] DESCRIPTION top curl is a tool for transferring data from or to a server using URLs. It supports these protocols: DICT, FILE, FTP, FTPS, GOPHER, GOPHERS, HTTP, HTTPS, IMAP, IMAPS, LDAP, LDAPS, MQTT, POP3, POP3S, RTMP, RTMPS, RTSP, SCP, SFTP, SMB, SMBS, SMTP, SMTPS, TELNET, TFTP, WS and WSS. curl is powered by libcurl for all transfer-related features. See libcurl(3) for details. URL top The URL syntax is protocol-dependent. You find a detailed description in RFC 3986. If you provide a URL without a leading protocol:// scheme, curl guesses what protocol you want. It then defaults to HTTP but assumes others based on often-used host name prefixes. For example, for host names starting with "ftp." curl assumes you want FTP. You can specify any amount of URLs on the command line. They are fetched in a sequential manner in the specified order unless you use -Z, --parallel. You can specify command line options and URLs mixed and in any order on the command line. curl attempts to reuse connections when doing multiple transfers, so that getting many files from the same server do not use multiple connects and setup handshakes. This improves speed. Connection reuse can only be done for URLs specified for a single command line invocation and cannot be performed between separate curl runs. Provide an IPv6 zone id in the URL with an escaped percentage sign. Like in "http://[fe80::3%25eth0]/" Everything provided on the command line that is not a command line option or its argument, curl assumes is a URL and treats it as such. 
GLOBBING top You can specify multiple URLs or parts of URLs by writing lists within braces or ranges within brackets. We call this "globbing". Provide a list with three different names like this: "http://site.{one,two,three}.com" or you can get sequences of alphanumeric series by using [] as in: "ftp://ftp.example.com/file[1-100].txt" "ftp://ftp.example.com/file[001-100].txt" (with leading zeros) "ftp://ftp.example.com/file[a-z].txt" Nested sequences are not supported, but you can use several ones next to each other: "http://example.com/archive[1996-1999]/vol[1-4]/part{a,b,c}.html" You can specify a step counter for the ranges to get every Nth number or letter: "http://example.com/file[1-100:10].txt" "http://example.com/file[a-z:2].txt" When using [] or {} sequences when invoked from a command line prompt, you probably have to put the full URL within double quotes to avoid the shell from interfering with it. This also goes for other characters treated special, like for example '&', '?' and '*'. Switch off globbing with -g, --globoff. VARIABLES top curl supports command line variables (added in 8.3.0). Set variables with --variable name=content or --variable name@file (where "file" can be stdin if set to a single dash (-)). Variable contents can be expanded in option parameters using "{{name}}" (without the quotes) if the option name is prefixed with "--expand-". This gets the contents of the variable "name" inserted, or a blank if the name does not exist as a variable. Insert "{{" verbatim in the string by prefixing it with a backslash, like "\{{". You can access and expand environment variables by first importing them. You can select to either require the environment variable to be set or you can provide a default value in case it is not already set. Plain --variable %name imports the variable called 'name' but exits with an error if that environment variable is not already set.
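The bracket-range globbing described in the GLOBBING section can be tried entirely offline. A sketch assuming curl is installed locally: three temporary files (this example's own names) are fetched through a single globbed file:// URL, so no network access is needed.

```shell
# Demonstrate curl URL globbing against local file:// URLs.
tmp=$(mktemp -d)
for i in 1 2 3; do printf 'part%s\n' "$i" > "$tmp/file$i.txt"; done
# Quote the URL so the shell does not interpret the brackets.
out=$(curl -s "file://$tmp/file[1-3].txt")
printf '%s\n' "$out"   # part1, part2 and part3, one per line
```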
To provide a default value if it is not set, use --variable %name=content or --variable %name@content. Example. Get the USER environment variable into the URL, fail if USER is not set: --variable '%USER' --expand-url "https://example.com/api/{{USER}}/method" When expanding variables, curl supports a set of functions that can make the variable contents more convenient to use. It can trim leading and trailing white space with trim, it can output the contents as a JSON quoted string with json, URL encode the string with url or base64 encode it with b64. To apply functions to a variable expansion, add them colon separated to the right side of the variable. Variable content holding null bytes that are not encoded when expanded causes an error. Example: get the contents of a file called $HOME/.secret into a variable called "fix". Make sure that the content is trimmed and percent-encoded when sent as POST data: --variable %HOME --expand-variable fix@{{HOME}}/.secret --expand-data "{{fix:trim:url}}" https://example.com/ Command line variables and expansions were added in 8.3.0. OUTPUT top If not told otherwise, curl writes the received data to stdout. It can be instructed to instead save that data into a local file, using the -o, --output or -O, --remote-name options. If curl is given multiple URLs to transfer on the command line, it similarly needs multiple options for where to save them. curl does not parse or otherwise "understand" the content it gets or writes as output. It does no encoding or decoding, unless explicitly asked to with dedicated command line options. PROTOCOLS top curl supports numerous protocols, or put in URL terms: schemes. Your particular build may not support them all. DICT Lets you lookup words using online dictionaries. FILE Read or write local files. curl does not support accessing file:// URL remotely, but when running on Microsoft Windows using the native UNC approach works.
FTP(S) curl supports the File Transfer Protocol with a lot of tweaks and levers. With or without using TLS. GOPHER(S) Retrieve files. HTTP(S) curl supports HTTP with numerous options and variations. It can speak HTTP version 0.9, 1.0, 1.1, 2 and 3 depending on build options and the correct command line options. IMAP(S) Using the mail reading protocol, curl can "download" emails for you. With or without using TLS. LDAP(S) curl can do directory lookups for you, with or without TLS. MQTT curl supports MQTT version 3. Downloading over MQTT equals "subscribe" to a topic while uploading/posting equals "publish" on a topic. MQTT over TLS is not supported (yet). POP3(S) Downloading from a pop3 server means getting a mail. With or without using TLS. RTMP(S) The Realtime Messaging Protocol is primarily used to serve streaming media and curl can download it. RTSP curl supports RTSP 1.0 downloads. SCP curl supports SSH version 2 scp transfers. SFTP curl supports SFTP (draft 5) done over SSH version 2. SMB(S) curl supports SMB version 1 for upload and download. SMTP(S) Uploading contents to an SMTP server means sending an email. With or without TLS. TELNET Telling curl to fetch a telnet URL starts an interactive session where it sends what it reads on stdin and outputs what the server sends it. TFTP curl can do TFTP downloads and uploads. PROGRESS METER top curl normally displays a progress meter during operations, indicating the amount of transferred data, transfer speeds and estimated time left, etc. The progress meter displays the transfer rate in bytes per second. The suffixes (k, M, G, T, P) are 1024 based. For example 1k is 1024 bytes. 1M is 1048576 bytes. curl displays this data to the terminal by default, so if you invoke curl to do an operation and it is about to write data to the terminal, it disables the progress meter as otherwise it would mess up the output mixing progress meter and response data. 
If you want a progress meter for HTTP POST or PUT requests, you need to redirect the response output to a file, using shell redirect (>), -o, --output or similar. This does not apply to FTP upload as that operation does not spit out any response data to the terminal. If you prefer a progress "bar" instead of the regular meter, -#, --progress-bar is your friend. You can also disable the progress meter completely with the -s, --silent option. VERSION top This man page describes curl 8.6.0. If you use a later version, chances are this man page does not fully document it. If you use an earlier version, this document tries to include version information about which specific version that introduced changes. You can always learn which the latest curl version is by running curl https://curl.se/info The online version of this man page is always showing the latest incarnation: https://curl.se/docs/manpage.html OPTIONS top Options start with one or two dashes. Many of the options require an additional value next to them. If provided text does not start with a dash, it is presumed to be and treated as a URL. The short "single-dash" form of the options, -d for example, may be used with or without a space between it and its value, although a space is a recommended separator. The long "double-dash" form, -d, --data for example, requires a space between it and its value. Short version options that do not need any additional values can be used immediately next to each other, like for example you can specify all the options -O, -L and -v at once as -OLv. In general, all boolean options are enabled with --option and yet again disabled with --no-option. That is, you use the same option name but prefix it with "no-". However, in this list we mostly only list and show the --option version of them. When -:, --next is used, it resets the parser state and you start again with a clean option state, except for the options that are "global". 
Global options retain their values and meaning even after -:, --next. The following options are global: --fail-early, --libcurl, --parallel-immediate, -Z, --parallel, -#, --progress-bar, --rate, -S, --show-error, --stderr, --styled-output, --trace-ascii, --trace-config, --trace-ids, --trace-time, --trace and -v, --verbose. --abstract-unix-socket <path> (HTTP) Connect through an abstract Unix domain socket, instead of using the network. Note: netstat shows the path of an abstract socket prefixed with '@', however the <path> argument should not have this leading character. If --abstract-unix-socket is provided several times, the last set value is used. Example: curl --abstract-unix-socket socketpath https://example.com See also --unix-socket. Added in 7.53.0. --alt-svc <file name> (HTTPS) This option enables the alt-svc parser in curl. If the file name points to an existing alt-svc cache file, that gets used. After a completed transfer, the cache is saved to the file name again if it has been modified. Specify a "" file name (zero length) to avoid loading/saving and make curl just handle the cache in memory. If this option is used several times, curl loads contents from all the files but the last one is used for saving. --alt-svc can be used several times in a command line Example: curl --alt-svc svc.txt https://example.com See also --resolve and --connect-to. Added in 7.64.1. --anyauth (HTTP) Tells curl to figure out authentication method by itself, and use the most secure one the remote site claims to support. This is done by first doing a request and checking the response-headers, thus possibly inducing an extra network round-trip. This is used instead of setting a specific authentication method, which you can do with --basic, --digest, --ntlm, and --negotiate. Using --anyauth is not recommended if you do uploads from stdin, since it may require data to be sent twice and then the client must be able to rewind. 
If the need should arise when uploading from stdin, the upload operation fails. Used together with -u, --user. Providing --anyauth multiple times has no extra effect. Example: curl --anyauth --user me:pwd https://example.com See also --proxy-anyauth, --basic and --digest. -a, --append (FTP SFTP) When used in an upload, this option makes curl append to the target file instead of overwriting it. If the remote file does not exist, it is created. Note that this flag is ignored by some SFTP servers (including OpenSSH). Providing -a, --append multiple times has no extra effect. Disable it again with --no-append. Example: curl --upload-file local --append ftp://example.com/ See also -r, --range and -C, --continue-at. --aws-sigv4 <provider1[:provider2[:region[:service]]]> (HTTP) Use AWS V4 signature authentication in the transfer. The provider argument is a string that is used by the algorithm when creating outgoing authentication headers. The region argument is a string that points to a geographic area of a resources collection (region-code) when the region name is omitted from the endpoint. The service argument is a string that points to a function provided by a cloud (service-code) when the service name is omitted from the endpoint. If --aws-sigv4 is provided several times, the last set value is used. Example: curl --aws-sigv4 "aws:amz:us-east-2:es" --user "key:secret" https://example.com See also --basic and -u, --user. Added in 7.75.0. --basic (HTTP) Tells curl to use HTTP Basic authentication with the remote host. This is the default and this option is usually pointless, unless you use it to override a previously set option that sets a different authentication method (such as --ntlm, --digest, or --negotiate). Used together with -u, --user. Providing --basic multiple times has no extra effect. Example: curl -u name:password --basic https://example.com See also --proxy-basic. 
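Since --basic sends the credentials merely base64-encoded (not encrypted) in an Authorization header, you can preview the exact header value locally before letting curl send it. A minimal sketch, using the placeholder credentials name:password from the example above:

```shell
# What `-u name:password --basic` puts on the wire: the Authorization
# header value is just "Basic " + base64("user:password").
creds='name:password'
printf 'Authorization: Basic %s\n' "$(printf '%s' "$creds" | base64)"
# prints: Authorization: Basic bmFtZTpwYXNzd29yZA==
```

Running the real transfer with -v shows curl emitting this same header, which is why --basic should only be trusted over HTTPS.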
--ca-native (TLS) Tells curl to use the CA store from the native operating system to verify the peer. By default, curl otherwise uses a CA store provided in a single file or directory, but when using this option it interfaces the operating system's own vault. This option only works for curl on Windows when built to use OpenSSL. When curl on Windows is built to use Schannel, this feature is implied and curl then only uses the native CA store. curl built with wolfSSL also supports this option (added in 8.3.0). Providing --ca-native multiple times has no extra effect. Disable it again with --no-ca-native. Example: curl --ca-native https://example.com See also --cacert, --capath and -k, --insecure. Added in 8.2.0. --cacert <file> (TLS) Tells curl to use the specified certificate file to verify the peer. The file may contain multiple CA certificates. The certificate(s) must be in PEM format. Normally curl is built to use a default file for this, so this option is typically used to alter that default file. curl recognizes the environment variable named 'CURL_CA_BUNDLE' if it is set, and uses the given path as a path to a CA cert bundle. This option overrides that variable. The windows version of curl automatically looks for a CA certs file named 'curl-ca-bundle.crt', either in the same directory as curl.exe, or in the Current Working Directory, or in any folder along your PATH. (iOS and macOS only) If curl is built against Secure Transport, then this option is supported for backward compatibility with other SSL engines, but it should not be set. If the option is not set, then curl uses the certificates in the system and user Keychain to verify the peer, which is the preferred method of verifying the peer's certificate chain. (Schannel only) This option is supported for Schannel in Windows 7 or later (added in 7.60.0). 
This option is supported for backward compatibility with other SSL engines; instead it is recommended to use Windows' store of root certificates (the default for Schannel). If --cacert is provided several times, the last set value is used. Example: curl --cacert CA-file.txt https://example.com See also --capath and -k, --insecure. --capath <dir> (TLS) Tells curl to use the specified certificate directory to verify the peer. Multiple paths can be provided by separating them with ":" (e.g. "path1:path2:path3"). The certificates must be in PEM format, and if curl is built against OpenSSL, the directory must have been processed using the c_rehash utility supplied with OpenSSL. Using --capath can allow OpenSSL-powered curl to make SSL-connections much more efficiently than using --cacert if the --cacert file contains many CA certificates. If this option is set, the default capath value is ignored. If --capath is provided several times, the last set value is used. Example: curl --capath /local/directory https://example.com See also --cacert and -k, --insecure. -E, --cert <certificate[:password]> (TLS) Tells curl to use the specified client certificate file when getting a file with HTTPS, FTPS or another SSL-based protocol. The certificate must be in PKCS#12 format if using Secure Transport, or PEM format if using any other engine. If the optional password is not specified, it is queried for on the terminal. Note that this option assumes a certificate file that is the private key and the client certificate concatenated. See -E, --cert and --key to specify them independently. In the <certificate> portion of the argument, you must escape the character ":" as "\:" so that it is not recognized as the password delimiter. Similarly, you must escape the character "\" as "\\" so that it is not recognized as an escape character. 
If curl is built against OpenSSL library, and the engine pkcs11 is available, then a PKCS#11 URI (RFC 7512) can be used to specify a certificate located in a PKCS#11 device. A string beginning with "pkcs11:" is interpreted as a PKCS#11 URI. If a PKCS#11 URI is provided, then the --engine option is set as "pkcs11" if none was provided and the --cert-type option is set as "ENG" if none was provided. (iOS and macOS only) If curl is built against Secure Transport, then the certificate string can either be the name of a certificate/private key in the system or user keychain, or the path to a PKCS#12-encoded certificate and private key. If you want to use a file from the current directory, please precede it with "./" prefix, in order to avoid confusion with a nickname. (Schannel only) Client certificates must be specified by a path expression to a certificate store. (Loading PFX is not supported; you can import it to a store first). You can use "<store location>\<store name>\<thumbprint>" to refer to a certificate in the system certificates store, for example, "CurrentUser\MY\934a7ac6f8a5d579285a74fa61e19f23ddfe8d7a". Thumbprint is usually a SHA-1 hex string which you can see in certificate details. Following store locations are supported: CurrentUser, LocalMachine, CurrentService, Services, CurrentUserGroupPolicy, LocalMachineGroupPolicy and LocalMachineEnterprise. If -E, --cert is provided several times, the last set value is used. Example: curl --cert certfile --key keyfile https://example.com See also --cert-type, --key and --key-type. --cert-status (TLS) Tells curl to verify the status of the server certificate by using the Certificate Status Request (aka. OCSP stapling) TLS extension. If this option is enabled and the server sends an invalid (e.g. expired) response, if the response suggests that the server certificate has been revoked, or no response at all is received, the verification fails. This is currently only implemented in the OpenSSL and GnuTLS backends. 
Providing --cert-status multiple times has no extra effect. Disable it again with --no-cert-status. Example: curl --cert-status https://example.com See also --pinnedpubkey. --cert-type <type> (TLS) Tells curl what type the provided client certificate is using. PEM, DER, ENG and P12 are recognized types. The default type depends on the TLS backend and is usually PEM, however for Secure Transport and Schannel it is P12. If -E, --cert is a pkcs11: URI then ENG is the default type. If --cert-type is provided several times, the last set value is used. Example: curl --cert-type PEM --cert file https://example.com See also -E, --cert, --key and --key-type. --ciphers <list of ciphers> (TLS) Specifies which ciphers to use in the connection. The list of ciphers must specify valid ciphers. Read up on SSL cipher list details on this URL: https://curl.se/docs/ssl-ciphers.html If --ciphers is provided several times, the last set value is used. Example: curl --ciphers ECDHE-ECDSA-AES256-CCM8 https://example.com See also --tlsv1.3, --tls13-ciphers and --proxy-ciphers. --compressed (HTTP) Request a compressed response using one of the algorithms curl supports, and automatically decompress the content. Response headers are not modified when saved, so if they are "interpreted" separately again at a later point they might appear to be saying that the content is (still) compressed; while in fact it has already been decompressed. If this option is used and the server sends an unsupported encoding, curl reports an error. This is a request, not an order; the server may or may not deliver data compressed. Providing --compressed multiple times has no extra effect. Disable it again with --no-compressed. Example: curl --compressed https://example.com See also --compressed-ssh. --compressed-ssh (SCP SFTP) Enables built-in SSH compression. This is a request, not an order; the server may or may not do it. Providing --compressed-ssh multiple times has no extra effect. 
Disable it again with --no-compressed-ssh. Example: curl --compressed-ssh sftp://example.com/ See also --compressed. Added in 7.56.0. -K, --config <file> Specify a text file to read curl arguments from. The command line arguments found in the text file are used as if they were provided on the command line. Options and their parameters must be specified on the same line in the file, separated by whitespace, colon, or the equals sign. Long option names can optionally be given in the config file without the initial double dashes and if so, the colon or equals characters can be used as separators. If the option is specified with one or two dashes, there can be no colon or equals character between the option and its parameter. If the parameter contains whitespace or starts with a colon (:) or equals sign (=), it must be specified enclosed within double quotes ("). Within double quotes the following escape sequences are available: \\, \", \t, \n, \r and \v. A backslash preceding any other letter is ignored. If the first non-blank column of a config line is a '#' character, that line is treated as a comment. Only write one option per physical line in the config file. A single line is required to be no more than 10 megabytes (since 8.2.0). Specify the filename to -K, --config as '-' to make curl read the file from stdin. Note that to be able to specify a URL in the config file, you need to specify it using the --url option, and not by simply writing the URL on its own line. So, it could look similar to this: url = "https://curl.se/docs/" # --- Example file --- # this is a comment url = "example.com" output = "curlhere.html" user-agent = "superagent/1.0" # and fetch another URL too url = "example.com/docs/manpage.html" -O referer = "http://nowhereatall.example.com/" # --- End of example file --- When curl is invoked, it (unless -q, --disable is used) checks for a default config file and uses it if found, even when -K, --config is used. 
The default config file is checked for in the following places in this order: 1) "$CURL_HOME/.curlrc" 2) "$XDG_CONFIG_HOME/curlrc" (Added in 7.73.0) 3) "$HOME/.curlrc" 4) Windows: "%USERPROFILE%\.curlrc" 5) Windows: "%APPDATA%\.curlrc" 6) Windows: "%USERPROFILE%\Application Data\.curlrc" 7) Non-Windows: use getpwuid to find the home directory 8) On Windows, if it finds no .curlrc file in the sequence described above, it checks for one in the same dir the curl executable is placed. On Windows two filenames are checked per location: .curlrc and _curlrc, preferring the former. Older versions on Windows checked for _curlrc only. -K, --config can be used several times in a command line Example: curl --config file.txt https://example.com See also -q, --disable. --connect-timeout <fractional seconds> Maximum time in seconds that you allow curl's connection to take. This only limits the connection phase, so if curl connects within the given period it continues - if not it exits. This option accepts decimal values. The decimal value needs to be provided using a dot (.) as decimal separator - not the local version even if it might be using another separator. The connection phase is considered complete when the DNS lookup and requested TCP, TLS or QUIC handshakes are done. If --connect-timeout is provided several times, the last set value is used. Examples: curl --connect-timeout 20 https://example.com curl --connect-timeout 3.14 https://example.com See also -m, --max-time. --connect-to <HOST1:PORT1:HOST2:PORT2> For a request to the given HOST1:PORT1 pair, connect to HOST2:PORT2 instead. This option is suitable to direct requests at a specific server, e.g. at a specific cluster node in a cluster of servers. This option is only used to establish the network connection. It does NOT affect the hostname/port that is used for TLS/SSL (e.g. SNI, certificate verification) or for the application protocols. "HOST1" and "PORT1" may be the empty string, meaning "any host/port". 
"HOST2" and "PORT2" may also be the empty string, meaning "use the request's original host/port". A "host" specified to this option is compared as a string, so it needs to match the name used in request URL. It can be either numerical such as "127.0.0.1" or the full host name such as "example.org". --connect-to can be used several times in a command line Example: curl --connect-to example.com:443:example.net:8443 https://example.com See also --resolve and -H, --header. -C, --continue-at <offset> Continue/Resume a previous file transfer at the given offset. The given offset is the exact number of bytes that are skipped, counting from the beginning of the source file before it is transferred to the destination. If used with uploads, the FTP server command SIZE is not used by curl. Use "-C -" to tell curl to automatically find out where/how to resume the transfer. It then uses the given output/input files to figure that out. If -C, --continue-at is provided several times, the last set value is used. Examples: curl -C - https://example.com curl -C 400 https://example.com See also -r, --range. -b, --cookie <data|filename> (HTTP) Pass the data to the HTTP server in the Cookie header. It is supposedly the data previously received from the server in a "Set-Cookie:" line. The data should be in the format "NAME1=VALUE1; NAME2=VALUE2". This makes curl use the cookie header with this content explicitly in all outgoing request(s). If multiple requests are done due to authentication, followed redirects or similar, they all get this cookie passed on. If no '=' symbol is used in the argument, it is instead treated as a filename to read previously stored cookie from. This option also activates the cookie engine which makes curl record incoming cookies, which may be handy if you are using this in combination with the -L, --location option or do multiple URL transfers on the same invoke. If the file name is exactly a minus ("-"), curl instead reads the contents from stdin. 
The file format of the file to read cookies from should be plain HTTP headers (Set-Cookie style) or the Netscape/Mozilla cookie file format. The file specified with -b, --cookie is only used as input. No cookies are written to the file. To store cookies, use the -c, --cookie-jar option. If you use the Set-Cookie file format and do not specify a domain then the cookie is not sent since the domain never matches. To address this, set a domain in Set-Cookie line (doing that includes subdomains) or preferably: use the Netscape format. Users often want to both read cookies from a file and write updated cookies back to a file, so using both -b, --cookie and -c, --cookie-jar in the same command line is common. -b, --cookie can be used several times in a command line Examples: curl -b cookiefile https://example.com curl -b cookiefile -c cookiefile https://example.com See also -c, --cookie-jar and -j, --junk-session-cookies. -c, --cookie-jar <filename> (HTTP) Specify to which file you want curl to write all cookies after a completed operation. Curl writes all cookies from its in-memory cookie storage to the given file at the end of operations. If no cookies are known, no data is written. The file is created using the Netscape cookie file format. If you set the file name to a single dash, "-", the cookies are written to stdout. The file specified with -c, --cookie-jar is only used for output. No cookies are read from the file. To read cookies, use the -b, --cookie option. Both options can specify the same file. This command line option activates the cookie engine that makes curl record and use cookies. The -b, --cookie option also activates it. If the cookie jar cannot be created or written to, the whole curl operation does not fail or even report an error clearly. Using -v, --verbose gets a warning displayed, but that is the only visible feedback you get about this possibly lethal situation. If -c, --cookie-jar is provided several times, the last set value is used. 
Examples: curl -c store-here.txt https://example.com curl -c store-here.txt -b read-these https://example.com See also -b, --cookie. --create-dirs When used in conjunction with the -o, --output option, curl creates the necessary local directory hierarchy as needed. This option creates the directories mentioned with the -o, --output option combined with the path possibly set with --output-dir. If the combined output file name uses no directory, or if the directories it mentions already exist, no directories are created. Created directories are made with mode 0750 on unix style file systems. To create remote directories when using FTP or SFTP, try --ftp-create-dirs. Providing --create-dirs multiple times has no extra effect. Disable it again with --no-create-dirs. Example: curl --create-dirs --output local/dir/file https://example.com See also --ftp-create-dirs and --output-dir. --create-file-mode <mode> (SFTP SCP FILE) When curl is used to create files remotely using one of the supported protocols, this option allows the user to set which 'mode' to set on the file at creation time, instead of the default 0644. This option takes an octal number as argument. If --create-file-mode is provided several times, the last set value is used. Example: curl --create-file-mode 0777 -T localfile sftp://example.com/new See also --ftp-create-dirs. Added in 7.75.0. --crlf (FTP SMTP) Convert line feeds to carriage return plus line feeds in upload. Useful for MVS (OS/390). (SMTP added in 7.40.0) Providing --crlf multiple times has no extra effect. Disable it again with --no-crlf. Example: curl --crlf -T file ftp://example.com/ See also -B, --use-ascii. --crlfile <file> (TLS) Provide a file using PEM format with a Certificate Revocation List that may specify peer certificates that are to be considered revoked. If --crlfile is provided several times, the last set value is used. Example: curl --crlfile rejects.txt https://example.com See also --cacert and --capath. 
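A quick way to watch --create-dirs do its work without touching the network is a file:// transfer into an output path whose directories do not exist yet; the paths below are scratch placeholders created on the spot:

```shell
# --create-dirs makes curl create the missing local directories
# named by -o. A file:// URL keeps the demonstration fully offline.
tmp=$(mktemp -d)
printf 'hello\n' > "$tmp/src.txt"
curl -s --create-dirs -o "$tmp/a/b/out.txt" "file://$tmp/src.txt"
cat "$tmp/a/b/out.txt"    # prints: hello
```

Without --create-dirs the same command fails, since curl does not create the a/b directories on its own.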
--curves <algorithm list> (TLS) Tells curl to request specific curves to use during SSL session establishment according to RFC 8422, 5.1. Multiple algorithms can be provided by separating them with ":" (e.g. "X25519:P-521"). The parameter is available identically in the "openssl s_client/s_server" utilities. --curves allows an OpenSSL-powered curl to make SSL-connections with exactly the (EC) curve requested by the client, avoiding nontransparent client/server negotiations. If this option is set, the default curves list built into OpenSSL is ignored. If --curves is provided several times, the last set value is used. Example: curl --curves X25519 https://example.com See also --ciphers. Added in 7.73.0. -d, --data <data> (HTTP MQTT) Sends the specified data in a POST request to the HTTP server, in the same way that a browser does when a user has filled in an HTML form and presses the submit button. This makes curl pass the data to the server using the content-type application/x-www-form-urlencoded. Compare to -F, --form. --data-raw is almost the same but does not have a special interpretation of the @ character. To post purely binary data, you should instead use the --data-binary option. To URL-encode the value of a form field you may use --data-urlencode. If any of these options is used more than once on the same command line, the data pieces specified are merged with a separating &-symbol. Thus, using '-d name=daniel -d skill=lousy' would generate a post chunk that looks like 'name=daniel&skill=lousy'. If you start the data with the letter @, the rest should be a file name to read the data from, or - if you want curl to read the data from stdin. Posting data from a file named 'foobar' would thus be done with -d, --data @foobar. When -d, --data is told to read from a file like that, carriage returns and newlines are stripped out. If you do not want the @ character to have a special interpretation use --data-raw instead.
The data for this option is passed on to the server exactly as provided on the command line. curl does not convert, change or improve it. It is up to the user to provide the data in the correct form. -d, --data can be used several times in a command line Examples: curl -d "name=curl" https://example.com curl -d "name=curl" -d "tool=cmdline" https://example.com curl -d @filename https://example.com See also --data-binary, --data-urlencode and --data-raw. This option is mutually exclusive to -F, --form and -I, --head and -T, --upload-file. --data-ascii <data> (HTTP) This is just an alias for -d, --data. --data-ascii can be used several times in a command line Example: curl --data-ascii @file https://example.com See also --data-binary, --data-raw and --data-urlencode. --data-binary <data> (HTTP) This posts data exactly as specified with no extra processing whatsoever. If you start the data with the letter @, the rest should be a filename. Data is posted in a similar manner as -d, --data does, except that newlines and carriage returns are preserved and conversions are never done. Like -d, --data the default content-type sent to the server is application/x-www-form-urlencoded. If you want the data to be treated as arbitrary binary data by the server then set the content-type to octet-stream: -H "Content-Type: application/octet-stream". If this option is used several times, the ones following the first append data as described in -d, --data. --data-binary can be used several times in a command line Example: curl --data-binary @filename https://example.com See also --data-ascii. --data-raw <data> (HTTP) This posts data similarly to -d, --data but without the special interpretation of the @ character. --data-raw can be used several times in a command line Examples: curl --data-raw "hello" https://example.com curl --data-raw "@at@at@" https://example.com See also -d, --data. 
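The merging rule for repeated -d options described above can be mimicked in plain shell, which is handy when you want to assemble the body yourself and hand it to curl in a single -d:

```shell
# Repeated -d pieces end up in one request body joined by '&':
# `-d name=daniel -d skill=lousy` sends `name=daniel&skill=lousy`.
body=''
for piece in 'name=daniel' 'skill=lousy'; do
  body="${body:+$body&}$piece"
done
echo "$body"    # prints: name=daniel&skill=lousy
```

Passing the result as curl -d "$body" produces the identical payload to giving the two -d options separately.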
--data-urlencode <data> (HTTP) This posts data, similar to the other -d, --data options with the exception that this performs URL-encoding. To be CGI-compliant, the <data> part should begin with a name followed by a separator and a content specification. The <data> part can be passed to curl using one of the following syntaxes: content This makes curl URL-encode the content and pass that on. Just be careful so that the content does not contain any = or @ symbols, as that makes the syntax match one of the other cases below! =content This makes curl URL-encode the content and pass that on. The preceding = symbol is not included in the data. name=content This makes curl URL-encode the content part and pass that on. Note that the name part is expected to be URL-encoded already. @filename This makes curl load data from the given file (including any newlines), URL-encode that data and pass it on in the POST. name@filename This makes curl load data from the given file (including any newlines), URL-encode that data and pass it on in the POST. The name part gets an equal sign appended, resulting in name=urlencoded-file-content. Note that the name is expected to be URL-encoded already. --data-urlencode can be used several times in a command line Examples: curl --data-urlencode name=val https://example.com curl --data-urlencode =encodethis https://example.com curl --data-urlencode name@file https://example.com curl --data-urlencode @fileonly https://example.com See also -d, --data and --data-raw. --delegation <LEVEL> (GSS/kerberos) Set LEVEL to tell the server what it is allowed to delegate when it comes to user credentials. none Do not allow any delegation. policy Delegates if and only if the OK-AS-DELEGATE flag is set in the Kerberos service ticket, which is a matter of realm policy. always Unconditionally allow the server to delegate. If --delegation is provided several times, the last set value is used. 
Example: curl --delegation "none" https://example.com See also -k, --insecure and --ssl. --digest (HTTP) Enables HTTP Digest authentication. This is an authentication scheme that prevents the password from being sent over the wire in clear text. Use this in combination with the normal -u, --user option to set user name and password. Providing --digest multiple times has no extra effect. Disable it again with --no-digest. Example: curl -u name:password --digest https://example.com See also -u, --user, --proxy-digest and --anyauth. This option is mutually exclusive to --basic and --ntlm and --negotiate. -q, --disable If used as the first parameter on the command line, the curlrc config file is not read or used. See the -K, --config for details on the default config file search path. Prior to 7.50.0 curl supported the short option name q but not the long option name disable. Providing -q, --disable multiple times has no extra effect. Disable it again with --no-disable. Example: curl -q https://example.com See also -K, --config. --disable-eprt (FTP) Tell curl to disable the use of the EPRT and LPRT commands when doing active FTP transfers. Curl normally first attempts to use EPRT before using PORT, but with this option, it uses PORT right away. EPRT is an extension to the original FTP protocol, and does not work on all servers, but enables more functionality in a better way than the traditional PORT command. --eprt can be used to explicitly enable EPRT again and --no-eprt is an alias for --disable-eprt. If the server is accessed using IPv6, this option has no effect as EPRT is necessary then. Disabling EPRT only changes the active behavior. If you want to switch to passive mode you need to not use -P, --ftp-port or force it with --ftp-pasv. Providing --disable-eprt multiple times has no extra effect. Disable it again with --no-disable-eprt. Example: curl --disable-eprt ftp://example.com/ See also --disable-epsv and -P, --ftp-port. 
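For the curious, the value --digest ultimately computes can be sketched offline in its plain RFC 2617 MD5 form (the no-qop variant; real exchanges add nonce counts and client nonces, and every value below is a made-up placeholder):

```shell
# Hedged sketch of the RFC 2617 MD5 digest (no-qop form) behind --digest:
#   response = MD5( MD5(user:realm:pass) : nonce : MD5(method:uri) )
user=me; pass=pwd; realm=testrealm; nonce=abc123; method=GET; uri=/
ha1=$(printf '%s:%s:%s' "$user" "$realm" "$pass" | md5sum | cut -d' ' -f1)
ha2=$(printf '%s:%s' "$method" "$uri" | md5sum | cut -d' ' -f1)
printf '%s:%s:%s' "$ha1" "$nonce" "$ha2" | md5sum | cut -d' ' -f1
```

The point of the scheme is visible here: only this hash crosses the wire, never the cleartext password itself.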
--disable-epsv (FTP) Tell curl to disable the use of the EPSV command when doing passive FTP transfers. Curl normally first attempts to use EPSV before PASV, but with this option, it does not try EPSV. --epsv can be used to explicitly enable EPSV again and --no-epsv is an alias for --disable-epsv. If the server is an IPv6 host, this option has no effect as EPSV is necessary then. Disabling EPSV only changes the passive behavior. If you want to switch to active mode you need to use -P, --ftp-port. Providing --disable-epsv multiple times has no extra effect. Disable it again with --no-disable-epsv. Example: curl --disable-epsv ftp://example.com/ See also --disable-eprt and -P, --ftp-port. --disallow-username-in-url This tells curl to exit if passed a URL containing a username. This is probably most useful when the URL is being provided at runtime or similar. Providing --disallow-username-in-url multiple times has no extra effect. Disable it again with --no-disallow-username-in-url. Example: curl --disallow-username-in-url https://example.com See also --proto. Added in 7.61.0. --dns-interface <interface> (DNS) Tell curl to send outgoing DNS requests through <interface>. This option is a counterpart to --interface (which does not affect DNS). The supplied string must be an interface name (not an address). If --dns-interface is provided several times, the last set value is used. Example: curl --dns-interface eth0 https://example.com See also --dns-ipv4-addr and --dns-ipv6-addr. --dns-interface requires that the underlying libcurl was built to support c-ares. --dns-ipv4-addr <address> (DNS) Tell curl to bind to a specific IP address when making IPv4 DNS requests, so that the DNS requests originate from this address. The argument should be a single IPv4 address. If --dns-ipv4-addr is provided several times, the last set value is used. Example: curl --dns-ipv4-addr 10.1.2.3 https://example.com See also --dns-interface and --dns-ipv6-addr. 
--dns-ipv4-addr requires that the underlying libcurl was built to support c-ares. --dns-ipv6-addr <address> (DNS) Tell curl to bind to a specific IP address when making IPv6 DNS requests, so that the DNS requests originate from this address. The argument should be a single IPv6 address. If --dns-ipv6-addr is provided several times, the last set value is used. Example: curl --dns-ipv6-addr 2a04:4e42::561 https://example.com See also --dns-interface and --dns-ipv4-addr. --dns-ipv6-addr requires that the underlying libcurl was built to support c-ares. --dns-servers <addresses> (DNS) Set the list of DNS servers to be used instead of the system default. The list of IP addresses should be separated with commas. Port numbers may also optionally be given as :<port-number> after each IP address. If --dns-servers is provided several times, the last set value is used. Example: curl --dns-servers 192.168.0.1,192.168.0.2 https://example.com See also --dns-interface and --dns-ipv4-addr. --dns-servers requires that the underlying libcurl was built to support c-ares. --doh-cert-status Same as --cert-status but used for DoH (DNS-over-HTTPS). Providing --doh-cert-status multiple times has no extra effect. Disable it again with --no-doh-cert-status. Example: curl --doh-cert-status --doh-url https://doh.example https://example.com See also --doh-insecure. Added in 7.76.0. --doh-insecure Same as -k, --insecure but used for DoH (DNS-over-HTTPS). Providing --doh-insecure multiple times has no extra effect. Disable it again with --no-doh-insecure. Example: curl --doh-insecure --doh-url https://doh.example https://example.com See also --doh-url. Added in 7.76.0. --doh-url <URL> Specifies which DNS-over-HTTPS (DoH) server to use to resolve hostnames, instead of using the default name resolver mechanism. The URL must be HTTPS. Some SSL options that you set for your transfer also applies to DoH since the name lookups take place over SSL. 
However, the certificate verification settings are not inherited but are controlled separately via --doh-insecure and --doh-cert-status. This option is unset if an empty string "" is used as the URL. (Added in 7.85.0) If --doh-url is provided several times, the last set value is used. Example: curl --doh-url https://doh.example https://example.com See also --doh-insecure. Added in 7.62.0. -D, --dump-header <filename> (HTTP FTP) Write the received protocol headers to the specified file. If no headers are received, the use of this option creates an empty file. When used in FTP, the FTP server response lines are considered being "headers" and thus are saved there. Having multiple transfers in one set of operations (i.e. the URLs in one -:, --next clause), appends them to the same file, separated by a blank line. If -D, --dump-header is provided several times, the last set value is used. Example: curl --dump-header store.txt https://example.com See also -o, --output. --egd-file <file> (TLS) Deprecated option (added in 7.84.0). Prior to that it only had an effect on curl if built to use old versions of OpenSSL. Specify the path name to the Entropy Gathering Daemon socket. The socket is used to seed the random engine for SSL connections. If --egd-file is provided several times, the last set value is used. Example: curl --egd-file /random/here https://example.com See also --random-file. --engine <name> (TLS) Select the OpenSSL crypto engine to use for cipher operations. Use --engine list to print a list of build-time supported engines. Note that not all (and possibly none) of the engines may be available at runtime. If --engine is provided several times, the last set value is used. Example: curl --engine flavor https://example.com See also --ciphers and --curves. --etag-compare <file> (HTTP) This option makes a conditional HTTP request for the specific ETag read from the given file by sending a custom If-None-Match header using the stored ETag. 
For correct results, make sure that the specified file contains only a single line with the desired ETag. An empty file is parsed as an empty ETag. Use the option --etag-save to first save the ETag from a response, and then use this option to compare against the saved ETag in a subsequent request. If --etag-compare is provided several times, the last set value is used. Example: curl --etag-compare etag.txt https://example.com See also --etag-save and -z, --time-cond. Added in 7.68.0. --etag-save <file> (HTTP) This option saves an HTTP ETag to the specified file. An ETag is a caching-related header, usually returned in a response. If no ETag is sent by the server, an empty file is created. If --etag-save is provided several times, the last set value is used. Example: curl --etag-save storetag.txt https://example.com See also --etag-compare. Added in 7.68.0. --expect100-timeout <seconds> (HTTP) Maximum time in seconds that you allow curl to wait for a 100-continue response when curl emits an Expect: 100-continue header in its request. By default curl waits one second. This option accepts decimal values! When curl stops waiting, it continues as if the response has been received. The decimal value needs to be provided using a dot (.) as decimal separator - not the local version even if it might be using another separator. If --expect100-timeout is provided several times, the last set value is used. Example: curl --expect100-timeout 2.5 -T file https://example.com See also --connect-timeout. -f, --fail (HTTP) Fail fast with no output at all on server errors. This is useful to enable scripts and users to better deal with failed attempts. In normal cases when an HTTP server fails to deliver a document, it returns an HTML document stating so (which often also describes why and more). This flag prevents curl from outputting that and makes it return error 22 instead.
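In a script, the error 22 mentioned above surfaces as curl's exit status. A minimal pattern, with the network call shown only as a comment and its result simulated (the URL is a placeholder):

```shell
# With -f, an HTTP error page is suppressed and curl exits with code 22:
#   curl --fail --silent https://example.com/missing; rc=$?
rc=22   # simulated: what a 4xx/5xx response yields under --fail
if [ "$rc" -eq 22 ]; then
  echo "HTTP error reported by server"
fi
```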
This method is not fail-safe and there are occasions where non-successful response codes slip through, especially when authentication is involved (response codes 401 and 407). Providing -f, --fail multiple times has no extra effect. Disable it again with --no-fail. Example: curl --fail https://example.com See also --fail-with-body and --fail-early. This option is mutually exclusive to --fail-with-body. --fail-early Fail and exit on the first detected transfer error. When curl is used to do multiple transfers on the command line, it attempts to operate on each given URL, one by one. By default, it ignores errors if there are more URLs given and the last URL's success determines the error code curl returns. So early failures are "hidden" by subsequent successful transfers. Using this option, curl instead returns an error on the first transfer that fails, independent of the number of URLs that are given on the command line. This way, no transfer failures go undetected by scripts and similar. This option does not imply -f, --fail, which causes transfers to fail due to the server's HTTP status code. You can combine the two options, however note -f, --fail is not global and is therefore contained by -:, --next. This option is global and does not need to be specified for each use of --next. Providing --fail-early multiple times has no extra effect. Disable it again with --no-fail-early. Example: curl --fail-early https://example.com https://two.example See also -f, --fail and --fail-with-body. Added in 7.52.0. --fail-with-body (HTTP) Return an error on server errors where the HTTP response code is 400 or greater. In normal cases when an HTTP server fails to deliver a document, it returns an HTML document stating so (which often also describes why and more). This flag allows curl to output and save that content but also to return error 22. This is an alternative option to -f, --fail which makes curl fail for the same circumstances but without saving the content.
Providing --fail-with-body multiple times has no extra effect. Disable it again with --no-fail-with-body. Example: curl --fail-with-body https://example.com See also -f, --fail and --fail-early. This option is mutually exclusive to -f, --fail. Added in 7.76.0. --false-start (TLS) Tells curl to use false start during the TLS handshake. False start is a mode where a TLS client starts sending application data before verifying the server's Finished message, thus saving a round trip when performing a full handshake. This is currently only implemented in the Secure Transport (on iOS 7.0 or later, or OS X 10.9 or later) backend. Providing --false-start multiple times has no extra effect. Disable it again with --no-false-start. Example: curl --false-start https://example.com See also --tcp-fastopen. -F, --form <name=content> (HTTP SMTP IMAP) For HTTP protocol family, this lets curl emulate a filled-in form in which a user has pressed the submit button. This causes curl to POST data using the Content-Type multipart/form-data according to RFC 2388. For SMTP and IMAP protocols, this is the means to compose a multipart mail message to transmit. This enables uploading of binary files etc. To force the 'content' part to be a file, prefix the file name with an @ sign. To just get the content part from a file, prefix the file name with the symbol <. The difference between @ and < is then that @ makes a file get attached in the post as a file upload, while the < makes a text field and just gets the contents for that text field from a file. Tell curl to read content from stdin instead of a file by using - as filename. This goes for both @ and < constructs. When stdin is used, the contents are buffered in memory first by curl to determine its size and allow a possible resend.
Defining a part's data from a named non-regular file (such as a named pipe or similar) is not subject to buffering and is instead read at transmission time; since the full size is unknown before the transfer starts, such data is sent as chunks by HTTP and rejected by IMAP. Example: send an image to an HTTP server, where 'profile' is the name of the form-field to which the file portrait.jpg is the input: curl -F profile=@portrait.jpg https://example.com/upload.cgi Example: send your name and shoe size in two text fields to the server: curl -F name=John -F shoesize=11 https://example.com/ Example: send your essay in a text field to the server. Send it as a plain text field, but get the contents for it from a local file: curl -F "story=<hugefile.txt" https://example.com/ You can also tell curl what Content-Type to use by using 'type=', in a manner similar to: curl -F "web=@index.html;type=text/html" example.com or curl -F "name=daniel;type=text/foo" example.com You can also explicitly change the name field of a file upload part by setting filename=, like this: curl -F "file=@localfile;filename=nameinpost" example.com If filename/path contains ',' or ';', it must be quoted by double-quotes like: curl -F "file=@\"local,file\";filename=\"name;in;post\"" example.com or curl -F 'file=@"local,file";filename="name;in;post"' example.com Note that if a filename/path is quoted by double-quotes, any double-quote or backslash within the filename must be escaped by backslash. Quoting must also be applied to non-file data if it contains semicolons, leading/trailing spaces or leading double quotes: curl -F 'colors="red; green; blue";type=text/x-myapp' example.com You can add custom headers to the field by setting headers=, like curl -F "submit=OK;headers=\"X-submit-type: OK\"" example.com or curl -F "submit=OK;headers=@headerfile" example.com The headers= keyword may appear more than once and above notes about quoting apply.
When headers are read from a file, empty lines and lines starting with '#' are comments and ignored; each header can be folded by splitting between two words and starting the continuation line with a space; embedded carriage-returns and trailing spaces are stripped. Here is an example of a header file contents: # This file contains two headers. X-header-1: this is a header # The following header is folded. X-header-2: this is another header To support sending multipart mail messages, the syntax is extended as follows: - name can be omitted: the equal sign is the first character of the argument, - if data starts with '(', this signals to start a new multipart: it can be followed by a content type specification. - a multipart can be terminated with a '=)' argument. Example: the following command sends an SMTP mime email consisting of an inline part in two alternative formats: plain text and HTML. It attaches a text file: curl -F '=(;type=multipart/alternative' \ -F '=plain text message' \ -F '= <body>HTML message</body>;type=text/html' \ -F '=)' -F '=@textfile.txt' ... smtp://example.com Data can be encoded for transfer using encoder=. Available encodings are binary and 8bit that do nothing other than add the corresponding Content-Transfer-Encoding header, 7bit that only rejects 8-bit characters with a transfer error, quoted-printable and base64 that encode data according to the corresponding schemes, limiting line length to 76 characters. Example: send multipart mail with a quoted-printable text message and a base64 attached file: curl -F '=text message;encoder=quoted-printable' \ -F '=@localfile;encoder=base64' ... smtp://example.com See further examples and details in the MANUAL. -F, --form can be used several times in a command line. Example: curl --form "name=curl" --form "file=@loadthis" https://example.com See also -d, --data, --form-string and --form-escape. This option is mutually exclusive to -d, --data and -I, --head and -T, --upload-file.
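The @ versus < distinction described above can be seen side by side. A sketch with a made-up field name and file; the commands are printed rather than executed:

```shell
# A small file to feed into both forms:
printf 'hello from a file\n' > story.txt
# @ attaches story.txt as a file upload part:
echo curl -F "attachment=@story.txt" https://example.com/upload.cgi
# < only fills a plain text field named "story" with the file's contents:
echo curl -F "story=<story.txt" https://example.com/upload.cgi
```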
--form-escape (HTTP) Tells curl to pass on names of multipart form fields and files using backslash-escaping instead of percent-encoding. If --form-escape is provided several times, the last set value is used. Example: curl --form-escape -F 'field\name=curl' -F 'file=@load"this' https://example.com See also -F, --form. Added in 7.81.0. --form-string <name=string> (HTTP SMTP IMAP) Similar to -F, --form except that the value string for the named parameter is used literally. Leading '@' and '<' characters, and the ';type=' string in the value have no special meaning. Use this in preference to -F, --form if there is any possibility that the string value may accidentally trigger the '@' or '<' features of -F, --form. --form-string can be used several times in a command line. Example: curl --form-string "data" https://example.com See also -F, --form. --ftp-account <data> (FTP) When an FTP server asks for "account data" after user name and password have been provided, this data is sent off using the ACCT command. If --ftp-account is provided several times, the last set value is used. Example: curl --ftp-account "mr.robot" ftp://example.com/ See also -u, --user. --ftp-alternative-to-user <command> (FTP) If authenticating with the USER and PASS commands fails, send this command. When connecting to Tumbleweed's Secure Transport server over FTPS using a client certificate, using "SITE AUTH" tells the server to retrieve the username from the certificate. If --ftp-alternative-to-user is provided several times, the last set value is used. Example: curl --ftp-alternative-to-user "U53r" ftp://example.com See also --ftp-account and -u, --user. --ftp-create-dirs (FTP SFTP) When an FTP or SFTP URL/operation uses a path that does not currently exist on the server, the standard behavior of curl is to fail. Using this option, curl instead attempts to create missing directories. Providing --ftp-create-dirs multiple times has no extra effect. Disable it again with --no-ftp-create-dirs.
Example: curl --ftp-create-dirs -T file ftp://example.com/remote/path/file See also --create-dirs. --ftp-method <method> (FTP) Control what method curl should use to reach a file on an FTP(S) server. The method argument should be one of the following alternatives: multicwd curl does a single CWD operation for each path part in the given URL. For deep hierarchies this means many commands. This is how RFC 1738 says it should be done. This is the default but the slowest behavior. nocwd curl does no CWD at all. curl does SIZE, RETR, STOR etc and gives a full path to the server for all these commands. This is the fastest behavior. singlecwd curl does one CWD with the full target directory and then operates on the file "normally" (like in the multicwd case). This is somewhat more standards compliant than 'nocwd' but without the full penalty of 'multicwd'. If --ftp-method is provided several times, the last set value is used. Examples: curl --ftp-method multicwd ftp://example.com/dir1/dir2/file curl --ftp-method nocwd ftp://example.com/dir1/dir2/file curl --ftp-method singlecwd ftp://example.com/dir1/dir2/file See also -l, --list-only. --ftp-pasv (FTP) Use passive mode for the data connection. Passive is the internal default behavior, but this option can be used to override a previous -P, --ftp-port option. Reversing an enforced passive mode is not possible; instead you must enforce the correct -P, --ftp-port again. Passive mode means that curl tries the EPSV command first and then PASV, unless --disable-epsv is used. Providing --ftp-pasv multiple times has no extra effect. Disable it again with --no-ftp-pasv. Example: curl --ftp-pasv ftp://example.com/ See also --disable-epsv. -P, --ftp-port <address> (FTP) Reverses the default initiator/listener roles when connecting with FTP. This option makes curl use active mode.
curl then tells the server to connect back to the client's specified address and port, while passive mode asks the server to set up an IP address and port for it to connect to. <address> should be one of: interface e.g. "eth0" to specify which interface's IP address you want to use (Unix only) IP address e.g. "192.168.10.1" to specify the exact IP address host name e.g. "my.host.domain" to specify the machine - make curl pick the same IP address that is already used for the control connection Disable the use of PORT with --ftp-pasv. Disable the attempt to use the EPRT command instead of PORT by using --disable-eprt. EPRT is really PORT++. You can also append ":[start]-[end]" to the right of the address, to tell curl what TCP port range to use. That means you specify a port range, from a lower to a higher number. A single number works as well, but do note that it increases the risk of failure since the port may not be available. If -P, --ftp-port is provided several times, the last set value is used. Examples: curl -P - ftp://example.com curl -P eth0 ftp://example.com curl -P 192.168.0.2 ftp://example.com See also --ftp-pasv and --disable-eprt. --ftp-pret (FTP) Tell curl to send a PRET command before PASV (and EPSV). Certain FTP servers, mainly drftpd, require this non-standard command for directory listings as well as up and downloads in PASV mode. Providing --ftp-pret multiple times has no extra effect. Disable it again with --no-ftp-pret. Example: curl --ftp-pret ftp://example.com/ See also -P, --ftp-port and --ftp-pasv. --ftp-skip-pasv-ip (FTP) Tell curl to not use the IP address the server suggests in its response to curl's PASV command when curl connects the data connection. Instead curl reuses the same IP address it already uses for the control connection. This option is enabled by default (added in 7.74.0). This option has no effect if PORT, EPRT or EPSV is used instead of PASV. Providing --ftp-skip-pasv-ip multiple times has no extra effect.
Disable it again with --no-ftp-skip-pasv-ip. Example: curl --ftp-skip-pasv-ip ftp://example.com/ See also --ftp-pasv. --ftp-ssl-ccc (FTP) Use CCC (Clear Command Channel). Shuts down the SSL/TLS layer after authenticating. The rest of the control channel communication is unencrypted. This allows NAT routers to follow the FTP transaction. The default mode is passive. Providing --ftp-ssl-ccc multiple times has no extra effect. Disable it again with --no-ftp-ssl-ccc. Example: curl --ftp-ssl-ccc ftps://example.com/ See also --ssl and --ftp-ssl-ccc-mode. --ftp-ssl-ccc-mode <active/passive> (FTP) Sets the CCC mode. The passive mode does not initiate the shutdown, but instead waits for the server to do it, and does not reply to the shutdown from the server. The active mode initiates the shutdown and waits for a reply from the server. Providing --ftp-ssl-ccc-mode multiple times has no extra effect. Disable it again with --no-ftp-ssl-ccc-mode. Example: curl --ftp-ssl-ccc-mode active --ftp-ssl-ccc ftps://example.com/ See also --ftp-ssl-ccc. --ftp-ssl-control (FTP) Require SSL/TLS for the FTP login, clear for transfer. Allows secure authentication, but non-encrypted data transfers for efficiency. Fails the transfer if the server does not support SSL/TLS. Providing --ftp-ssl-control multiple times has no extra effect. Disable it again with --no-ftp-ssl-control. Example: curl --ftp-ssl-control ftp://example.com See also --ssl. -G, --get (HTTP) When used, this option makes all data specified with -d, --data, --data-binary or --data-urlencode be used in an HTTP GET request instead of the POST request that otherwise would be used. The data is appended to the URL with a '?' separator. If used in combination with -I, --head, the POST data is instead appended to the URL with a HEAD request. Providing -G, --get multiple times has no extra effect. Disable it again with --no-get.
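Conceptually, -G lifts the -d pairs into the URL's query string, joined with '&' after the '?'. A sketch with made-up data pairs, printing the two equivalent spellings:

```shell
# Two -d pairs (values invented for illustration):
data1="tool=curl"
data2="age=old"
# What curl requests with --get: the pairs joined into a query string:
query="${data1}&${data2}"
echo "curl https://example.com/?${query}"
# The equivalent spelling using --get:
echo "curl --get -d $data1 -d $data2 https://example.com/"
```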
Examples: curl --get https://example.com curl --get -d "tool=curl" -d "age=old" https://example.com curl --get -I -d "tool=curl" https://example.com See also -d, --data and -X, --request. -g, --globoff This option switches off the "URL globbing parser". When you set this option, you can specify URLs that contain the letters {}[] without having curl itself interpret them. Note that these letters are not normal legal URL contents but they should be encoded according to the URI standard. Providing -g, --globoff multiple times has no extra effect. Disable it again with --no-globoff. Example: curl -g "https://example.com/{[]}}}}" See also -K, --config and -q, --disable. --happy-eyeballs-timeout-ms <milliseconds> Happy Eyeballs is an algorithm that attempts to connect to both IPv4 and IPv6 addresses for dual-stack hosts, giving IPv6 a head-start of the specified number of milliseconds. If the IPv6 address cannot be connected to within that time, then a connection attempt is made to the IPv4 address in parallel. The first connection to be established is the one that is used. The range of suggested useful values is limited. Happy Eyeballs RFC 6555 says "It is RECOMMENDED that connection attempts be paced 150-250 ms apart to balance human factors against network load." libcurl currently defaults to 200 ms. Firefox and Chrome currently default to 300 ms. If --happy-eyeballs-timeout-ms is provided several times, the last set value is used. Example: curl --happy-eyeballs-timeout-ms 500 https://example.com See also -m, --max-time and --connect-timeout. Added in 7.59.0. --haproxy-clientip (HTTP) Sets a client IP in HAProxy PROXY protocol v1 header at the beginning of the connection. For valid requests, IPv4 addresses must be indicated as a series of exactly 4 integers in the range [0..255] inclusive written in decimal representation separated by exactly one dot between each other. 
Leading zeroes are not permitted in front of numbers in order to avoid any possible confusion with octal numbers. IPv6 addresses must be indicated as a series of colon-delimited groups of up to 4 hexadecimal digits (upper or lower case), with the acceptance of one double colon sequence to replace the largest acceptable range of consecutive zeroes. The total number of decoded bits must exactly be 128. Otherwise, any string can be accepted for the client IP and get sent. If used, it replaces --haproxy-protocol; it is not necessary to specify both flags. This option is primarily useful when sending test requests to verify a service is working as intended. If --haproxy-clientip is provided several times, the last set value is used. Example: curl --haproxy-clientip $IP See also -x, --proxy. Added in 8.2.0. --haproxy-protocol (HTTP) Send a HAProxy PROXY protocol v1 header at the beginning of the connection. This is used by some load balancers and reverse proxies to indicate the client's true IP address and port. This option is primarily useful when sending test requests to a service that expects this header. Providing --haproxy-protocol multiple times has no extra effect. Disable it again with --no-haproxy-protocol. Example: curl --haproxy-protocol https://example.com See also -x, --proxy. Added in 7.60.0. -I, --head (HTTP FTP FILE) Fetch the headers only! HTTP-servers feature the command HEAD which this uses to get nothing but the header of a document. When used on an FTP or FILE file, curl displays the file size and last modification time only. Providing -I, --head multiple times has no extra effect. Disable it again with --no-head. Example: curl -I https://example.com See also -G, --get, -v, --verbose and --trace-ascii. -H, --header <header/@file> (HTTP IMAP SMTP) Extra header to include in information sent. When used within an HTTP request, it is added to the regular request headers.
For an IMAP or SMTP MIME uploaded mail built with -F, --form options, it is prepended to the resulting MIME document, effectively including it at the mail global level. It does not affect raw uploaded mails (Added in 7.56.0). You may specify any number of extra headers. Note that if you should add a custom header that has the same name as one of the internal ones curl would use, your externally set header is used instead of the internal one. This allows you to make even trickier stuff than curl would normally do. You should not replace internally set headers without knowing perfectly well what you are doing. Remove an internal header by giving a replacement without content on the right side of the colon, as in: -H "Host:". If you send the custom header with no value then its header must be terminated with a semicolon, such as -H "X-Custom-Header;" to send "X-Custom-Header:". curl makes sure that each header you add/replace is sent with the proper end-of-line marker; you should thus not add that as a part of the header content: do not add newlines or carriage returns, they only mess things up for you. curl passes on the verbatim string you give it without any filter or other safeguards. That includes white space and control characters. This option can take an argument in @filename style, which then adds a header for each line in the input file. Using @- makes curl read the header file from stdin. Added in 7.55.0. Please note that most anti-spam utilities check the presence and value of several MIME mail headers: these are "From:", "To:", "Date:" and "Subject:" among others and should be added with this option. You need --proxy-header to send custom headers intended for an HTTP proxy. Added in 7.37.0. Passing on a "Transfer-Encoding: chunked" header when doing an HTTP request with a request body, makes curl send the data using chunked encoding.
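The @filename form of -H described above reads one header per line. A sketch with invented header names; the curl command is printed rather than executed:

```shell
# headers.txt holds one header per line (names made up for illustration):
cat > headers.txt <<'EOF'
X-First-Name: Joe
X-Tracking-Id: 12345
EOF
# Each line becomes one -H value; @- would read from stdin instead:
echo curl -H @headers.txt https://example.com
```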
WARNING: headers set with this option are set in all HTTP requests - even after redirects are followed, like when told with -L, --location. This can lead to the header being sent to other hosts than the original host, so sensitive headers should be used with caution combined with following redirects. -H, --header can be used several times in a command line. Examples: curl -H "X-First-Name: Joe" https://example.com curl -H "User-Agent: yes-please/2000" https://example.com curl -H "Host:" https://example.com curl -H @headers.txt https://example.com See also -A, --user-agent and -e, --referer. -h, --help <category> Usage help. This lists all curl command line options within the given category. If no argument is provided, curl displays only the most important command line arguments. For category all, curl displays help for all options. For category category, curl displays all available help categories. Example: curl --help all See also -v, --verbose. --hostpubmd5 <md5> (SFTP SCP) Pass a string containing 32 hexadecimal digits. The string should be the 128 bit MD5 checksum of the remote host's public key. curl refuses the connection with the host unless the md5sums match. If --hostpubmd5 is provided several times, the last set value is used. Example: curl --hostpubmd5 e5c1c49020640a5ab0f2034854c321a8 sftp://example.com/ See also --hostpubsha256. --hostpubsha256 <sha256> (SFTP SCP) Pass a string containing a Base64-encoded SHA256 hash of the remote host's public key. Curl refuses the connection with the host unless the hashes match. This feature requires libcurl to be built with libssh2 and does not work with other SSH backends. If --hostpubsha256 is provided several times, the last set value is used. Example: curl --hostpubsha256 NDVkMTQxMGQ1ODdmMjQ3MjczYjAyOTY5MmRkMjVmNDQ= sftp://example.com/ See also --hostpubmd5. Added in 7.80.0. --hsts <file name> (HTTPS) This option enables HSTS for the transfer.
If the file name points to an existing HSTS cache file, that is used. After a completed transfer, the cache is saved to the file name again if it has been modified. If curl is told to use HTTP:// for a transfer involving a host name that exists in the HSTS cache, it upgrades the transfer to use HTTPS. Each HSTS cache entry has an individual lifetime after which the upgrade is no longer performed. Specify a "" file name (zero length) to avoid loading/saving and make curl just handle HSTS in memory. If this option is used several times, curl loads contents from all the files but the last one is used for saving. --hsts can be used several times in a command line. Example: curl --hsts cache.txt https://example.com See also --proto. Added in 7.74.0. --http0.9 (HTTP) Tells curl to be fine with an HTTP version 0.9 response. HTTP/0.9 is a response without headers and therefore you can also connect with this to non-HTTP servers and still get a response since curl simply transparently downgrades - if allowed. HTTP/0.9 is disabled by default (added in 7.66.0). Providing --http0.9 multiple times has no extra effect. Disable it again with --no-http0.9. Example: curl --http0.9 https://example.com See also --http1.1, --http2 and --http3. Added in 7.64.0. -0, --http1.0 (HTTP) Tells curl to use HTTP version 1.0 instead of using its internally preferred HTTP version. Providing -0, --http1.0 multiple times has no extra effect. Example: curl --http1.0 https://example.com See also --http0.9 and --http1.1. This option is mutually exclusive to --http1.1 and --http2 and --http2-prior-knowledge and --http3. --http1.1 (HTTP) Tells curl to use HTTP version 1.1. Providing --http1.1 multiple times has no extra effect. Example: curl --http1.1 https://example.com See also -0, --http1.0 and --http0.9. This option is mutually exclusive to -0, --http1.0 and --http2 and --http2-prior-knowledge and --http3. --http2 (HTTP) Tells curl to use HTTP version 2.
For HTTPS, this means curl negotiates HTTP/2 in the TLS handshake. curl does this by default. For HTTP, this means curl attempts to upgrade the request to HTTP/2 using the Upgrade: request header. When curl uses HTTP/2 over HTTPS, it does not itself insist on TLS 1.2 or higher even though that is required by the specification. A user can add this version requirement with --tlsv1.2. Providing --http2 multiple times has no extra effect. Example: curl --http2 https://example.com See also --http1.1, --http3 and --no-alpn. --http2 requires that the underlying libcurl was built to support HTTP/2. This option is mutually exclusive to --http1.1 and -0, --http1.0 and --http2-prior-knowledge and --http3. --http2-prior-knowledge (HTTP) Tells curl to issue its non-TLS HTTP requests using HTTP/2 without HTTP/1.1 Upgrade. It requires prior knowledge that the server supports HTTP/2 straight away. HTTPS requests still do HTTP/2 the standard way with negotiated protocol version in the TLS handshake. Providing --http2-prior-knowledge multiple times has no extra effect. Disable it again with --no-http2-prior-knowledge. Example: curl --http2-prior-knowledge https://example.com See also --http2 and --http3. --http2-prior-knowledge requires that the underlying libcurl was built to support HTTP/2. This option is mutually exclusive to --http1.1 and -0, --http1.0 and --http2 and --http3. --http3 (HTTP) Tells curl to try HTTP/3 to the host in the URL, but fall back to earlier HTTP versions if the HTTP/3 connection establishment fails. HTTP/3 is only available for HTTPS and not for HTTP URLs. This option allows a user to avoid using the Alt-Svc method of upgrading to HTTP/3 when you know that the target speaks HTTP/3 on the given host and port. When asked to use HTTP/3, curl issues a separate attempt to use older HTTP versions with a slight delay, so if the HTTP/3 transfer fails or is slow, curl still tries to proceed with an older HTTP version.
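The version-selection flags above are mutually exclusive, so pinning a version is a matter of picking exactly one of them. A sketch that only prints the commands (the requests themselves would need a server):

```shell
# One flag per request; mixing them on one command line is rejected.
for flag in --http1.1 --http2 --http3; do
  echo "curl $flag https://example.com"
done
```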
Use --http3-only for similar functionality without a fallback. Providing --http3 multiple times has no extra effect. Example: curl --http3 https://example.com See also --http1.1 and --http2. --http3 requires that the underlying libcurl was built to support HTTP/3. This option is mutually exclusive to --http1.1 and -0, --http1.0 and --http2 and --http2-prior-knowledge and --http3-only. Added in 7.66.0. --http3-only (HTTP) Instructs curl to use HTTP/3 to the host in the URL, with no fallback to earlier HTTP versions. HTTP/3 can only be used for HTTPS and not for HTTP URLs. For HTTP, this option triggers an error. This option allows a user to avoid using the Alt-Svc method of upgrading to HTTP/3 when you know that the target speaks HTTP/3 on the given host and port. This option makes curl fail if a QUIC connection cannot be established; it does not attempt any other HTTP versions on its own. Use --http3 for similar functionality with a fallback. Providing --http3-only multiple times has no extra effect. Example: curl --http3-only https://example.com See also --http1.1, --http2 and --http3. --http3-only requires that the underlying libcurl was built to support HTTP/3. This option is mutually exclusive to --http1.1 and -0, --http1.0 and --http2 and --http2-prior-knowledge and --http3. Added in 7.88.0. --ignore-content-length (FTP HTTP) For HTTP, ignore the Content-Length header. This is particularly useful for servers running Apache 1.x, which reports incorrect Content-Length for files larger than 2 gigabytes. For FTP, this makes curl skip the SIZE command to figure out the size before downloading a file. This option does not work for HTTP if libcurl was built to use hyper. Providing --ignore-content-length multiple times has no extra effect. Disable it again with --no-ignore-content-length. Example: curl --ignore-content-length https://example.com See also --ftp-skip-pasv-ip. -i, --include (HTTP FTP) Include response headers in the output.
HTTP response headers can include things like server name, cookies, date of the document, HTTP version and more... With non-HTTP protocols, the "headers" are other server communication. To view the request headers, consider the -v, --verbose option. Prior to 7.75.0 curl did not print the headers if -f, --fail was used in combination with this option and an error was reported by the server. Providing -i, --include multiple times has no extra effect. Disable it again with --no-include. Example: curl -i https://example.com See also -v, --verbose. -k, --insecure (TLS SFTP SCP) By default, every secure connection curl makes is verified to be secure before the transfer takes place. This option makes curl skip the verification step and proceed without checking. When this option is not used for protocols using TLS, curl verifies the server's TLS certificate before it continues: that the certificate contains the right name which matches the host name used in the URL and that the certificate has been signed by a CA certificate present in the cert store. See this online resource for further details: https://curl.se/docs/sslcerts.html For SFTP and SCP, this option makes curl skip the known_hosts verification. known_hosts is a file normally stored in the user's home directory in the ".ssh" subdirectory, which contains host names and their public keys. WARNING: using this option makes the transfer insecure. When curl uses secure protocols it trusts responses and allows for example HSTS and Alt-Svc information to be stored and used subsequently. Using -k, --insecure can make curl trust and use such information from malicious servers. Providing -k, --insecure multiple times has no extra effect. Disable it again with --no-insecure. Example: curl --insecure https://example.com See also --proxy-insecure, --cacert and --capath. --interface <name> Perform an operation using a specified interface. You can enter interface name, IP address or host name.
An example could look like: curl --interface eth0:1 https://www.example.com/ On Linux it can be used to specify a VRF, but the binary needs to either have CAP_NET_RAW or be run as root. More information about Linux VRF: https://www.kernel.org/doc/Documentation/networking/vrf.txt If --interface is provided several times, the last set value is used. Example: curl --interface eth0 https://example.com See also --dns-interface. --ipfs-gateway <URL> (IPFS) Specify which gateway to use for IPFS and IPNS URLs. Not specifying this will instead make curl check if the IPFS_GATEWAY environment variable is set, or if a ~/.ipfs/gateway file holding the gateway URL exists. If you run a local IPFS node, this gateway is by default available under http://localhost:8080. A full example URL would look like: curl --ipfs-gateway http://localhost:8080 ipfs://bafybeigagd5nmnn2iys2f3doro7ydrevyr2mzarwidgadawmamiteydbzi There are many public IPFS gateways. See for example: https://ipfs.github.io/public-gateway-checker/ WARNING: If you opt to go for a remote gateway you should be aware that you completely trust the gateway. This is fine with local gateways as you host them yourself. With remote gateways there could potentially be a malicious actor returning data that does not match your request, inspecting or even interfering with it. You will not notice this when using curl. A mitigation could be to go for a "trustless" gateway. This means you locally verify the data. Consult the docs page on trusted vs trustless: https://docs.ipfs.tech/reference/http/gateway/#trusted-vs-trustless If --ipfs-gateway is provided several times, the last set value is used. Example: curl --ipfs-gateway https://example.com ipfs:// See also -h, --help and -M, --manual. Added in 8.4.0. -4, --ipv4 This option tells curl to use IPv4 addresses only when resolving host names, and not for example try IPv6. Providing -4, --ipv4 multiple times has no extra effect.
Example: curl --ipv4 https://example.com See also --http1.1 and --http2. This option is mutually exclusive to -6, --ipv6. -6, --ipv6 This option tells curl to use IPv6 addresses only when resolving host names, and not for example try IPv4. Providing -6, --ipv6 multiple times has no extra effect. Example: curl --ipv6 https://example.com See also --http1.1 and --http2. This option is mutually exclusive to -4, --ipv4. --json <data> (HTTP) Sends the specified JSON data in a POST request to the HTTP server. --json works as a shortcut for passing on these three options: --data [arg] --header "Content-Type: application/json" --header "Accept: application/json" There is no verification that the passed in data is actual JSON or that the syntax is correct. If you start the data with the letter @, the rest should be a file name to read the data from, or a single dash (-) if you want curl to read the data from stdin. Posting data from a file named 'foobar' would thus be done with --json @foobar and to instead read the data from stdin, use --json @-. If this option is used more than once on the same command line, the additional data pieces are concatenated to the previous before sending. The headers this option sets can be overridden with -H, --header as usual. --json can be used several times in a command line. Examples: curl --json '{ "drink": "coffee" }' https://example.com curl --json '{ "drink":' --json ' "coffee" }' https://example.com curl --json @prepared https://example.com curl --json @- https://example.com < json.txt See also --data-binary and --data-raw. This option is mutually exclusive to -F, --form and -I, --head and -T, --upload-file. Added in 7.82.0. -j, --junk-session-cookies (HTTP) When curl is told to read cookies from a given file, this option makes it discard all "session cookies". This has the same effect as if a new session is started. Typical browsers discard session cookies when they are closed down.
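The --json shortcut described above expands to an ordinary --data plus two headers. A minimal sketch, assuming python3 is on PATH (the JSON body and the local pre-check are mine; curl itself performs no JSON validation):

```shell
# Minimal sketch: --json is shorthand for --data plus two headers.
# Body is illustrative; the python3 pre-check is my addition, since
# curl does not verify that the data is valid JSON.
body='{ "drink": "coffee" }'

if python3 -c 'import json,sys; json.loads(sys.argv[1])' "$body" 2>/dev/null; then
  valid=yes
else
  valid=no
fi
echo "json valid: $valid"

# Equivalent long-hand form of: curl --json "$body" https://example.com
# curl --data "$body" \
#      --header "Content-Type: application/json" \
#      --header "Accept: application/json" \
#      https://example.com
```

The pre-check is optional; it only guards against sending syntactically broken JSON, which curl would otherwise pass through untouched.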
Providing -j, --junk-session-cookies multiple times has no extra effect. Disable it again with --no-junk-session-cookies. Example: curl --junk-session-cookies -b cookies.txt https://example.com See also -b, --cookie and -c, --cookie-jar. --keepalive-time <seconds> This option sets the time a connection needs to remain idle before sending keepalive probes and the time between individual keepalive probes. It is currently effective on operating systems offering the TCP_KEEPIDLE and TCP_KEEPINTVL socket options (meaning Linux, recent AIX, HP-UX and more). Keepalives are used by the TCP stack to detect broken networks on idle connections. The number of missed keepalive probes before declaring the connection down is OS dependent and is commonly 9 or 10. This option has no effect if --no-keepalive is used. If unspecified, the option defaults to 60 seconds. If --keepalive-time is provided several times, the last set value is used. Example: curl --keepalive-time 20 https://example.com See also --no-keepalive and -m, --max-time. --key <key> (TLS SSH) Private key file name. Allows you to provide your private key in this separate file. For SSH, if not specified, curl tries the following candidates in order: '~/.ssh/id_rsa', '~/.ssh/id_dsa', './id_rsa', './id_dsa'. If curl is built against OpenSSL library, and the engine pkcs11 is available, then a PKCS#11 URI (RFC 7512) can be used to specify a private key located in a PKCS#11 device. A string beginning with "pkcs11:" is interpreted as a PKCS#11 URI. If a PKCS#11 URI is provided, then the --engine option is set as "pkcs11" if none was provided and the --key-type option is set as "ENG" if none was provided. If curl is built against Secure Transport or Schannel then this option is ignored for TLS protocols (HTTPS, etc). Those backends expect the private key to be already present in the keychain or PKCS#12 file containing the certificate. If --key is provided several times, the last set value is used. 
Example: curl --cert certificate --key here https://example.com See also --key-type and -E, --cert. --key-type <type> (TLS) Private key file type. Specify which type your --key provided private key is. DER, PEM, and ENG are supported. If not specified, PEM is assumed. If --key-type is provided several times, the last set value is used. Example: curl --key-type DER --key here https://example.com See also --key. --krb <level> (FTP) Enable Kerberos authentication and use. The level must be entered and should be one of 'clear', 'safe', 'confidential', or 'private'. Should you use a level that is not one of these, 'private' is used. If --krb is provided several times, the last set value is used. Example: curl --krb clear ftp://example.com/ See also --delegation and --ssl. --krb requires that the underlying libcurl was built to support Kerberos. --libcurl <file> Append this option to any ordinary curl command line, and you get libcurl-using C source code written to the file that does the equivalent of what your command-line operation does! This option is global and does not need to be specified for each use of --next. If --libcurl is provided several times, the last set value is used. Example: curl --libcurl client.c https://example.com See also -v, --verbose. --limit-rate <speed> Specify the maximum transfer rate you want curl to use - for both downloads and uploads. This feature is useful if you have a limited pipe and you would like your transfer not to use your entire bandwidth, making it slower than it otherwise would be. The given speed is measured in bytes/second, unless a suffix is appended. Appending 'k' or 'K' counts the number as kilobytes, 'm' or 'M' makes it megabytes, while 'g' or 'G' makes it gigabytes. The suffixes (k, M, G, T, P) are 1024 based. For example 1k is 1024. Examples: 200K, 3m and 1G. The rate limiting logic works on averaging the transfer speed to no more than the set threshold over a period of multiple seconds.
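The 1024-based suffix arithmetic used by --limit-rate (and other size options) can be illustrated with a small shell helper; to_bytes is my own name, not part of curl:

```shell
# Sketch of curl's 1024-based size suffixes (k/K, m/M, g/G).
# to_bytes is my own helper name, not part of curl.
to_bytes() {
  case $1 in
    *[kK]) echo $(( ${1%?} * 1024 )) ;;
    *[mM]) echo $(( ${1%?} * 1024 * 1024 )) ;;
    *[gG]) echo $(( ${1%?} * 1024 * 1024 * 1024 )) ;;
    *)     echo "$1" ;;                 # bare numbers are plain bytes/second
  esac
}
to_bytes 1k     # prints 1024
to_bytes 200K   # prints 204800
to_bytes 3m     # prints 3145728
```

So --limit-rate 200K caps the average transfer speed at 204800 bytes/second.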
If you also use the -Y, --speed-limit option, that option takes precedence and might cripple the rate-limiting slightly, to help keep the speed-limit logic working. If --limit-rate is provided several times, the last set value is used. Examples: curl --limit-rate 100K https://example.com curl --limit-rate 1000 https://example.com curl --limit-rate 10M https://example.com See also --rate, -Y, --speed-limit and -y, --speed-time. -l, --list-only (FTP POP3 SFTP) (FTP) When listing an FTP directory, this switch forces a name-only view. This is especially useful if the user wants to machine-parse the contents of an FTP directory since the normal directory view does not use a standard look or format. When used like this, the option causes an NLST command to be sent to the server instead of LIST. Note: Some FTP servers list only files in their response to NLST; they do not include sub-directories and symbolic links. (SFTP) When listing an SFTP directory, this switch forces a name-only view, one per line. This is especially useful if the user wants to machine-parse the contents of an SFTP directory since the normal directory view provides more information than just file names. (POP3) When retrieving a specific email from POP3, this switch forces a LIST command to be performed instead of RETR. This is particularly useful if the user wants to see if a specific message-id exists on the server and what size it is. Note: When combined with -X, --request, this option can be used to send a UIDL command instead, so the user may use the email's unique identifier rather than its message-id to make the request. Providing -l, --list-only multiple times has no extra effect. Disable it again with --no-list-only. Example: curl --list-only ftp://example.com/dir/ See also -Q, --quote and -X, --request. --local-port <num/range> Set a preferred single number or range (FROM-TO) of local port numbers to use for the connection(s).
Note that port numbers by nature are a scarce resource so setting this range to something too narrow might cause unnecessary connection setup failures. If --local-port is provided several times, the last set value is used. Example: curl --local-port 1000-3000 https://example.com See also -g, --globoff. -L, --location (HTTP) If the server reports that the requested page has moved to a different location (indicated with a Location: header and a 3XX response code), this option makes curl redo the request on the new place. If used together with -i, --include or -I, --head, headers from all requested pages are shown. When authentication is used, curl only sends its credentials to the initial host. If a redirect takes curl to a different host, the user+password is not passed on. See also --location-trusted on how to change this. Limit the number of redirects to follow by using the --max-redirs option. When curl follows a redirect and if the request is a POST, it sends the following request with a GET if the HTTP response was 301, 302, or 303. If the response code was any other 3xx code, curl resends the following request using the same unmodified method. You can tell curl to not change POST requests to GET after a 30x response by using the dedicated options for that: --post301, --post302 and --post303. The method set with -X, --request overrides the method curl would otherwise select to use. Providing -L, --location multiple times has no extra effect. Disable it again with --no-location. Example: curl -L https://example.com See also --resolve and --alt-svc. --location-trusted (HTTP) Like -L, --location, but allows sending the name + password to all hosts that the site may redirect to. This may or may not introduce a security breach if the site redirects you to a site to which you send your authentication info (which is plaintext in the case of HTTP Basic authentication). Providing --location-trusted multiple times has no extra effect.
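The method-rewriting rule that -L, --location applies after a redirect can be sketched as a tiny shell helper; redirect_method is my own name for it:

```shell
# Sketch of the -L method rule: POST becomes GET after 301/302/303,
# any other 3xx code keeps the original method. Helper name is mine.
redirect_method() {  # $1 = original method, $2 = HTTP response code
  case $2 in
    301|302|303) if [ "$1" = POST ]; then echo GET; else echo "$1"; fi ;;
    *)           echo "$1" ;;
  esac
}
redirect_method POST 301   # prints GET
redirect_method POST 307   # prints POST
redirect_method GET 302    # prints GET
```

The --post301, --post302 and --post303 options each disable the POST-to-GET branch for their respective response code.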
Disable it again with --no-location-trusted. Example: curl --location-trusted -u user:password https://example.com See also -u, --user. --login-options <options> (IMAP LDAP POP3 SMTP) Specify the login options to use during server authentication. You can use login options to specify protocol specific options that may be used during authentication. At present only IMAP, POP3 and SMTP support login options. For more information about login options please see RFC 2384, RFC 5092 and the IETF draft https://datatracker.ietf.org/doc/html/draft-earhart-url-smtp-00. Since 8.2.0, IMAP supports the login option "AUTH=+LOGIN". With this option, curl uses the plain (not SASL) LOGIN IMAP command even if the server advertises SASL authentication. Care should be taken in using this option, as it sends your password over the network in plain text. This does not work if the IMAP server disables the plain LOGIN (e.g. to prevent password snooping). If --login-options is provided several times, the last set value is used. Example: curl --login-options 'AUTH=*' imap://example.com See also -u, --user. --mail-auth <address> (SMTP) Specify a single address. This is used to specify the authentication address (identity) of a submitted message that is being relayed to another server. If --mail-auth is provided several times, the last set value is used. Example: curl --mail-auth user@example.com -T mail smtp://example.com/ See also --mail-rcpt and --mail-from. --mail-from <address> (SMTP) Specify a single address that the given mail should get sent from. If --mail-from is provided several times, the last set value is used. Example: curl --mail-from user@example.com -T mail smtp://example.com/ See also --mail-rcpt and --mail-auth. --mail-rcpt <address> (SMTP) Specify a single email address, user name or mailing list name. Repeat this option several times to send to multiple recipients.
When performing an address verification (VRFY command), the recipient should be specified as the user name or user name and domain (as per Section 3.5 of RFC 5321). When performing a mailing list expand (EXPN command), the recipient should be specified using the mailing list name, such as "Friends" or "London-Office". --mail-rcpt can be used several times in a command line. Example: curl --mail-rcpt user@example.net smtp://example.com See also --mail-rcpt-allowfails. --mail-rcpt-allowfails (SMTP) When sending data to multiple recipients, by default curl aborts the SMTP conversation if at least one of the recipients causes the RCPT TO command to return an error. The default behavior can be changed by passing the --mail-rcpt-allowfails command-line option, which makes curl ignore errors and proceed with the remaining valid recipients. If all recipients trigger RCPT TO failures and this flag is specified, curl still aborts the SMTP conversation and returns the error received in response to the last RCPT TO command. Providing --mail-rcpt-allowfails multiple times has no extra effect. Disable it again with --no-mail-rcpt-allowfails. Example: curl --mail-rcpt-allowfails --mail-rcpt dest@example.com smtp://example.com See also --mail-rcpt. Added in 7.69.0. -M, --manual Manual. Display the huge help text. Example: curl --manual See also -v, --verbose, --libcurl and --trace. --max-filesize <bytes> (FTP HTTP MQTT) Specify the maximum size (in bytes) of a file to download. If the file requested is larger than this value, the transfer does not start and curl returns with exit code 63. A size modifier may be used. For example, appending 'k' or 'K' counts the number as kilobytes, 'm' or 'M' makes it megabytes, while 'g' or 'G' makes it gigabytes. Examples: 200K, 3m and 1G. (Added in 7.58.0) NOTE: before curl 8.4.0, this option had no effect for files whose size was not known prior to download, even if the transfer ended up larger than the given limit.
Starting with curl 8.4.0, this option aborts the transfer if it reaches the threshold during transfer. If --max-filesize is provided several times, the last set value is used. Example: curl --max-filesize 100K https://example.com See also --limit-rate. --max-redirs <num> (HTTP) Set maximum number of redirections to follow. When -L, --location is used, this prevents curl from following too many redirects; by default, the limit is set to 50 redirects. Set this option to -1 to make it unlimited. If --max-redirs is provided several times, the last set value is used. Example: curl --max-redirs 3 --location https://example.com See also -L, --location. -m, --max-time <fractional seconds> Maximum time in seconds that you allow each transfer to take. This is useful for preventing your batch jobs from hanging for hours due to slow networks or links going down. This option accepts decimal values. If you enable retrying the transfer (--retry) then the maximum time counter is reset each time the transfer is retried. You can use --retry-max-time to limit the retry time. The decimal value needs to be provided using a dot (.) as the decimal separator - not the local version even if it might use another separator. If -m, --max-time is provided several times, the last set value is used. Examples: curl --max-time 10 https://example.com curl --max-time 2.92 https://example.com See also --connect-timeout and --retry-max-time. --metalink This option was previously used to specify a Metalink resource. Metalink support is disabled in curl for security reasons (added in 7.78.0). If --metalink is provided several times, the last set value is used. Example: curl --metalink file https://example.com See also -Z, --parallel. --negotiate (HTTP) Enables Negotiate (SPNEGO) authentication. This option requires a library built with GSS-API or SSPI support. Use -V, --version to see if your curl supports GSS-API/SSPI or SPNEGO.
When using this option, you must also provide a fake -u, --user option to activate the authentication code properly. Sending a '-u :' is enough as the user name and password from the -u, --user option are not actually used. Providing --negotiate multiple times has no extra effect. Example: curl --negotiate -u : https://example.com See also --basic, --ntlm, --anyauth and --proxy-negotiate. -n, --netrc Makes curl scan the .netrc file in the user's home directory for login name and password. This is typically used for FTP on Unix. If used with HTTP, curl enables user authentication. See netrc(5) and ftp(1) for details on the file format. Curl does not complain if that file does not have the right permissions (it should be neither world- nor group-readable). The environment variable "HOME" is used to find the home directory. On Windows, two filenames in the home directory are checked: .netrc and _netrc, preferring the former. Older versions on Windows checked for _netrc only. A quick and simple example of how to set up a .netrc to allow curl to FTP to the machine host.domain.com with user name 'myself' and password 'secret' could look similar to: machine host.domain.com login myself password secret Providing -n, --netrc multiple times has no extra effect. Disable it again with --no-netrc. Example: curl --netrc https://example.com See also --netrc-file, -K, --config and -u, --user. This option is mutually exclusive to --netrc-file and --netrc-optional. --netrc-file <filename> This option is similar to -n, --netrc, except that you provide the path (absolute or relative) to the netrc file that curl should use. You can only specify one netrc file per invocation. It abides by --netrc-optional if specified. If --netrc-file is provided several times, the last set value is used. Example: curl --netrc-file netrc https://example.com See also -n, --netrc, -u, --user and -K, --config. This option is mutually exclusive to -n, --netrc.
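Building on the single-line example above, a netrc file usable with -n, --netrc or --netrc-file can hold several machines, with tokens on one line or spread across lines. Host names and credentials here are illustrative only:

```
# ~/.netrc - illustrative entries; keep it neither world- nor
# group-readable (e.g. chmod 600), even though curl does not enforce this.
machine host.domain.com
  login myself
  password secret

machine ftp.example.org
  login anonymous
  password me@example.org
```

curl then picks the entry whose machine name matches the host in the URL.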
--netrc-optional Similar to -n, --netrc, but this option makes the .netrc usage optional and not mandatory as the -n, --netrc option does. Providing --netrc-optional multiple times has no extra effect. Disable it again with --no-netrc-optional. Example: curl --netrc-optional https://example.com See also --netrc-file. This option is mutually exclusive to -n, --netrc. -:, --next Tells curl to use a separate operation for the following URL and associated options. This allows you to send several URL requests, each with their own specific options, such as different user names or custom requests for each. -:, --next resets all local options; only global ones have their values survive over to the operation following the -:, --next instruction. Global options include -v, --verbose, --trace, --trace-ascii and --fail-early. For example, you can do both a GET and a POST in a single command line: curl www1.example.com --next -d postthis www2.example.com -:, --next can be used several times in a command line. Examples: curl https://example.com --next -d postthis www2.example.com curl -I https://example.com --next https://example.net/ See also -Z, --parallel and -K, --config. --no-alpn (HTTPS) Disable the ALPN TLS extension. ALPN is enabled by default if libcurl was built with an SSL library that supports ALPN. ALPN is used by a libcurl that supports HTTP/2 to negotiate HTTP/2 support with the server during https sessions. Note that this is the negated option name documented. You can use --alpn to enable ALPN. Providing --no-alpn multiple times has no extra effect. Disable it again with --alpn. Example: curl --no-alpn https://example.com See also --no-npn and --http2. --no-alpn requires that the underlying libcurl was built to support TLS. -N, --no-buffer Disables the buffering of the output stream.
In normal work situations, curl uses a standard buffered output stream that has the effect that it outputs the data in chunks, not necessarily exactly when the data arrives. Using this option disables that buffering. Note that this is the negated option name documented. You can use --buffer to enable buffering again. Providing -N, --no-buffer multiple times has no extra effect. Disable it again with --buffer. Example: curl --no-buffer https://example.com See also -#, --progress-bar. --no-clobber When used in conjunction with the -o, --output, -J, --remote-header-name, -O, --remote-name, or --remote-name-all options, curl avoids overwriting files that already exist. Instead, a dot and a number get appended to the name of the file that would be created, up to filename.100, after which it does not create any file. Note that this is the negated option name documented. You can thus use --clobber to enforce the clobbering, even if -J, --remote-header-name is specified. Providing --no-clobber multiple times has no extra effect. Disable it again with --clobber. Example: curl --no-clobber --output local/dir/file https://example.com See also -o, --output and -O, --remote-name. Added in 7.83.0. --no-keepalive Disables the use of keepalive messages on the TCP connection. curl otherwise enables them by default. Note that this is the negated option name documented. You can thus use --keepalive to enforce keepalive. Providing --no-keepalive multiple times has no extra effect. Disable it again with --keepalive. Example: curl --no-keepalive https://example.com See also --keepalive-time. --no-npn (HTTPS) curl never uses NPN; as of 7.86.0 this option has no effect. Disable the NPN TLS extension. NPN is enabled by default if libcurl was built with an SSL library that supports NPN. NPN is used by a libcurl that supports HTTP/2 to negotiate HTTP/2 support with the server during https sessions. Providing --no-npn multiple times has no extra effect. Disable it again with --npn.
Example: curl --no-npn https://example.com See also --no-alpn and --http2. --no-npn requires that the underlying libcurl was built to support TLS. --no-progress-meter Option to switch off the progress meter output without muting or otherwise affecting warning and informational messages like -s, --silent does. Note that this is the negated option name documented. You can thus use --progress-meter to enable the progress meter again. Providing --no-progress-meter multiple times has no extra effect. Disable it again with --progress-meter. Example: curl --no-progress-meter -o store https://example.com See also -v, --verbose and -s, --silent. Added in 7.67.0. --no-sessionid (TLS) Disable curl's use of SSL session-ID caching. By default all transfers are done using the cache. Note that while nothing should ever get hurt by attempting to reuse SSL session-IDs, there seem to be broken SSL implementations in the wild that may require you to disable this in order for you to succeed. Note that this is the negated option name documented. You can thus use --sessionid to enforce session-ID caching. Providing --no-sessionid multiple times has no extra effect. Disable it again with --sessionid. Example: curl --no-sessionid https://example.com See also -k, --insecure. --noproxy <no-proxy-list> Comma-separated list of hosts for which not to use a proxy, if one is specified. The only wildcard is a single * character, which matches all hosts, and effectively disables the proxy. Each name in this list is matched as either a domain which contains the hostname, or the hostname itself. For example, local.com would match local.com, local.com:80, and www.local.com, but not www.notlocal.com. This option overrides the environment variables that disable the proxy ('no_proxy' and 'NO_PROXY') (added in 7.53.0). If there is an environment variable disabling a proxy, you can set the no proxy list to "" to override it. 
IP addresses specified to this option can be provided using CIDR notation (added in 7.86.0): an appended slash and number specifies the number of "network bits" out of the address to use in the comparison. For example "192.168.0.0/16" would match all addresses starting with "192.168". If --noproxy is provided several times, the last set value is used. Example: curl --noproxy "www.example" https://example.com See also -x, --proxy. --ntlm (HTTP) Enables NTLM authentication. The NTLM authentication method was designed by Microsoft and is used by IIS web servers. It is a proprietary protocol, reverse-engineered by clever people and implemented in curl based on their efforts. This kind of behavior should not be endorsed, you should encourage everyone who uses NTLM to switch to a public and documented authentication method instead, such as Digest. If you want to enable NTLM for your proxy authentication, then use --proxy-ntlm. Providing --ntlm multiple times has no extra effect. Example: curl --ntlm -u user:password https://example.com See also --proxy-ntlm. --ntlm requires that the underlying libcurl was built to support TLS. This option is mutually exclusive to --basic and --negotiate and --digest and --anyauth. --ntlm-wb (HTTP) Enables NTLM much in the style --ntlm does, but hand over the authentication to the separate binary ntlmauth application that is executed when needed. Providing --ntlm-wb multiple times has no extra effect. Example: curl --ntlm-wb -u user:password https://example.com See also --ntlm and --proxy-ntlm. --oauth2-bearer <token> (IMAP LDAP POP3 SMTP HTTP) Specify the Bearer Token for OAUTH 2.0 server authentication. The Bearer Token is used in conjunction with the user name which can be specified as part of the --url or -u, --user options. The Bearer Token and user name are formatted according to RFC 6750. If --oauth2-bearer is provided several times, the last set value is used. 
Example: curl --oauth2-bearer "mF_9.B5f-4.1JqM" https://example.com See also --basic, --ntlm and --digest. -o, --output <file> Write output to <file> instead of stdout. If you are using {} or [] to fetch multiple documents, you should quote the URL and you can use '#' followed by a number in the <file> specifier. That variable is replaced with the current string for the URL being fetched. Like in: curl "http://{one,two}.example.com" -o "file_#1.txt" or use several variables like: curl "http://{site,host}.host[1-5].example" -o "#1_#2" You may use this option as many times as the number of URLs you have. For example, if you specify two URLs on the same command line, you can use it like this: curl -o aa example.com -o bb example.net and the order of the -o options and the URLs does not matter, just that the first -o is for the first URL and so on, so the above command line can also be written as curl example.com example.net -o aa -o bb See also the --create-dirs option to create the local directories dynamically. Specifying the output as '-' (a single dash) passes the output to stdout. To suppress response bodies, you can redirect output to /dev/null: curl example.com -o /dev/null Or for Windows: curl example.com -o nul -o, --output can be used several times in a command line Examples: curl -o file https://example.com curl "http://{one,two}.example.com" -o "file_#1.txt" curl "http://{site,host}.host[1-5].example" -o "#1_#2" curl -o file https://example.com -o file2 https://example.net See also -O, --remote-name, --remote-name-all and -J, --remote-header-name. --output-dir <dir> This option specifies the directory in which files should be stored, when -O, --remote-name or -o, --output are used. The given output directory is used for all URLs and output options on the command line, up until the first -:, --next. If the specified target directory does not exist, the operation fails unless --create-dirs is also used. 
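The '#N' substitution that -o, --output performs for globbed URLs, described above, can be mimicked for illustration; expand_name is my own helper, curl does this internally:

```shell
# Sketch of the '#N' filename substitution -o does for {} / [] globs.
# expand_name is my own helper; curl performs this internally.
expand_name() {  # $1 = template, $2 = value for #1, $3 = value for #2 (optional)
  out=$(printf '%s' "$1" | sed "s/#1/$2/g")
  [ $# -ge 3 ] && out=$(printf '%s' "$out" | sed "s/#2/$3/g")
  printf '%s\n' "$out"
}
expand_name 'file_#1.txt' one   # prints file_one.txt
expand_name '#1_#2' site 3      # prints site_3
```

So for curl "http://{one,two}.example.com" -o "file_#1.txt", the first URL is saved as file_one.txt and the second as file_two.txt.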
If --output-dir is provided several times, the last set value is used. Example: curl --output-dir "tmp" -O https://example.com See also -O, --remote-name and -J, --remote-header-name. Added in 7.73.0. -Z, --parallel Makes curl perform its transfers in parallel as compared to the regular serial manner. This option is global and does not need to be specified for each use of --next. Providing -Z, --parallel multiple times has no extra effect. Disable it again with --no-parallel. Example: curl --parallel https://example.com -o file1 https://example.com -o file2 See also -:, --next and -v, --verbose. Added in 7.66.0. --parallel-immediate When doing parallel transfers, this option instructs curl to prefer opening more connections in parallel at once rather than waiting to see if new transfers can be added as multiplexed streams on another connection. This option is global and does not need to be specified for each use of --next. Providing --parallel-immediate multiple times has no extra effect. Disable it again with --no-parallel-immediate. Example: curl --parallel-immediate -Z https://example.com -o file1 https://example.com -o file2 See also -Z, --parallel and --parallel-max. Added in 7.68.0. --parallel-max <num> When asked to do parallel transfers, using -Z, --parallel, this option controls the maximum number of transfers to do simultaneously. This option is global and does not need to be specified for each use of -:, --next. The default is 50. If --parallel-max is provided several times, the last set value is used. Example: curl --parallel-max 100 -Z https://example.com ftp://example.com/ See also -Z, --parallel. Added in 7.66.0. --pass <phrase> (SSH TLS) Passphrase for the private key. If --pass is provided several times, the last set value is used. Example: curl --pass secret --key file https://example.com See also --key and -u, --user. --path-as-is Tell curl to not handle sequences of /../ or /./ in the given URL path.
Normally curl squashes or merges them according to standards but with this option set you tell it not to do that. Providing --path-as-is multiple times has no extra effect. Disable it again with --no-path-as-is. Example: curl --path-as-is https://example.com/../../etc/passwd See also --request-target. --pinnedpubkey <hashes> (TLS) Tells curl to use the specified public key file (or hashes) to verify the peer. This can be a path to a file which contains a single public key in PEM or DER format, or any number of base64 encoded sha256 hashes preceded by 'sha256//' and separated by ';'. When negotiating a TLS or SSL connection, the server sends a certificate indicating its identity. A public key is extracted from this certificate and if it does not exactly match the public key provided to this option, curl aborts the connection before sending or receiving any data. This option is independent of option -k, --insecure. If you use both options together then the peer is still verified by public key. PEM/DER support: OpenSSL and GnuTLS, wolfSSL (added in 7.43.0), mbedTLS, Secure Transport macOS 10.7+/iOS 10+ (7.54.1), Schannel (7.58.1). sha256 support: OpenSSL, GnuTLS and wolfSSL, mbedTLS (added in 7.47.0), Secure Transport macOS 10.7+/iOS 10+ (7.54.1), Schannel (7.58.1). Other SSL backends not supported. If --pinnedpubkey is provided several times, the last set value is used. Examples: curl --pinnedpubkey keyfile https://example.com curl --pinnedpubkey 'sha256//ce118b51897f4452dc' https://example.com See also --hostpubsha256. --post301 (HTTP) Tells curl to respect RFC 7231/6.4.2 and not convert POST requests into GET requests when following a 301 redirection. The non-RFC behavior is ubiquitous in web browsers, so curl does the conversion by default to maintain consistency. However, a server may require a POST to remain a POST after such a redirection. This option is meaningful only when using -L, --location. Providing --post301 multiple times has no extra effect.
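A 'sha256//' pin of the form --pinnedpubkey accepts can be derived with the openssl CLI. This sketch generates a throwaway RSA key just to show the mechanics; for real pinning you would extract the public key from the server's certificate instead (assumes openssl and base64 are available; the file path is illustrative):

```shell
# Sketch: compute a base64 sha256 pin of a public key, in the
# 'sha256//<base64>' form --pinnedpubkey accepts. A throwaway key is
# generated for demonstration only; pin your server's real public key.
openssl genpkey -algorithm RSA -out /tmp/demo.key 2>/dev/null
pin=$(openssl pkey -in /tmp/demo.key -pubout -outform der 2>/dev/null \
      | openssl dgst -sha256 -binary | base64)
echo "sha256//$pin"
# Usage would then look like:
#   curl --pinnedpubkey "sha256//$pin" https://example.com
```

The digest is taken over the DER-encoded public key, so the same key always yields the same pin regardless of the certificate wrapping it.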
Disable it again with --no-post301. Example: curl --post301 --location -d "data" https://example.com See also --post302, --post303 and -L, --location. --post302 (HTTP) Tells curl to respect RFC 7231/6.4.3 and not convert POST requests into GET requests when following a 302 redirection. The non-RFC behavior is ubiquitous in web browsers, so curl does the conversion by default to maintain consistency. However, a server may require a POST to remain a POST after such a redirection. This option is meaningful only when using -L, --location. Providing --post302 multiple times has no extra effect. Disable it again with --no-post302. Example: curl --post302 --location -d "data" https://example.com See also --post301, --post303 and -L, --location. --post303 (HTTP) Tells curl to violate RFC 7231/6.4.4 and not convert POST requests into GET requests when following 303 redirections. A server may require a POST to remain a POST after a 303 redirection. This option is meaningful only when using -L, --location. Providing --post303 multiple times has no extra effect. Disable it again with --no-post303. Example: curl --post303 --location -d "data" https://example.com See also --post302, --post301 and -L, --location. --preproxy [protocol://]host[:port] Use the specified SOCKS proxy before connecting to an HTTP or HTTPS -x, --proxy. In such a case curl first connects to the SOCKS proxy and then connects (through SOCKS) to the HTTP or HTTPS proxy. Hence pre proxy. The pre proxy string should be specified with a protocol:// prefix to specify alternative proxy protocols. Use socks4://, socks4a://, socks5:// or socks5h:// to request the specific SOCKS version to be used. No protocol specified makes curl default to SOCKS4. If the port number is not specified in the proxy string, it is assumed to be 1080. User and password that might be provided in the proxy string are URL decoded by curl. This allows you to pass in special characters such as @ by using %40 or pass in a colon with %3a. 
If --preproxy is provided several times, the last set value is used. Example: curl --preproxy socks5://proxy.example -x http://http.example https://example.com See also -x, --proxy and --socks5. Added in 7.52.0. -#, --progress-bar Make curl display transfer progress as a simple progress bar instead of the standard, more informational, meter. This progress bar draws a single line of '#' characters across the screen and shows a percentage if the transfer size is known. For transfers without a known size, there is a space ship (-=o=-) that moves back and forth but only while data is being transferred, with a set of flying hash sign symbols on top. This option is global and does not need to be specified for each use of --next. Providing -#, --progress-bar multiple times has no extra effect. Disable it again with --no-progress-bar. Example: curl -# -O https://example.com See also --styled-output. --proto <protocols> Tells curl to limit what protocols it may use for transfers. Protocols are evaluated left to right, are comma separated, and are each a protocol name or 'all', optionally prefixed by zero or more modifiers. Available modifiers are: + Permit this protocol in addition to protocols already permitted (this is the default if no modifier is used). - Deny this protocol, removing it from the list of protocols already permitted. = Permit only this protocol (ignoring the list already permitted), though subject to later modification by subsequent entries in the comma separated list. For example: --proto -ftps uses the default protocols, but disables ftps --proto -all,https,+http only enables http and https --proto =http,https also only enables http and https Unknown and disabled protocols produce a warning. This allows scripts to safely rely on being able to disable potentially dangerous protocols, without relying upon support for that protocol being built into curl to avoid an error. 
This option can be used multiple times, in which case the effect is the same as concatenating the protocols into one instance of the option. If --proto is provided several times, the last set value is used. Example: curl --proto =http,https,sftp https://example.com See also --proto-redir and --proto-default. --proto-default <protocol> Tells curl to use protocol for any URL missing a scheme name. An unknown or unsupported protocol causes error CURLE_UNSUPPORTED_PROTOCOL (1). This option does not change the default proxy protocol (http). Without this option set, curl guesses protocol based on the host name, see --url for details. If --proto-default is provided several times, the last set value is used. Example: curl --proto-default https ftp.example.com See also --proto and --proto-redir. --proto-redir <protocols> Tells curl to limit what protocols it may use on redirect. Protocols denied by --proto are not overridden by this option. See --proto for how protocols are represented. Example, allow only HTTP and HTTPS on redirect: curl --proto-redir -all,http,https http://example.com By default curl only allows HTTP, HTTPS, FTP and FTPS on redirects (added in 7.65.2). Specifying all or +all enables all protocols on redirects, which is not good for security. If --proto-redir is provided several times, the last set value is used. Example: curl --proto-redir =http,https https://example.com See also --proto. -x, --proxy [protocol://]host[:port] Use the specified proxy. The proxy string can be specified with a protocol:// prefix. If no protocol is specified, or if it is http://, the proxy is treated as an HTTP proxy. Use socks4://, socks4a://, socks5:// or socks5h:// to request a specific SOCKS version to be used. Unix domain sockets are supported for socks proxy. Set localhost for the host part. e.g. socks5h://localhost/path/to/socket.sock HTTPS proxy support works with the https:// protocol prefix for OpenSSL and GnuTLS (added in 7.52.0).
It also works for BearSSL, mbedTLS, rustls, Schannel, Secure Transport and wolfSSL (added in 7.87.0). Unrecognized and unsupported proxy protocols cause an error (added in 7.52.0). Ancient curl versions ignored unknown schemes and used http:// instead. If the port number is not specified in the proxy string, it is assumed to be 1080. This option overrides existing environment variables that set the proxy to use. If there is an environment variable setting a proxy, you can set proxy to "" to override it. All operations that are performed over an HTTP proxy are transparently converted to HTTP. It means that certain protocol specific operations might not be available. This is not the case if you can tunnel through the proxy, as done with the -p, --proxytunnel option. User and password that might be provided in the proxy string are URL decoded by curl. This allows you to pass in special characters such as @ by using %40 or pass in a colon with %3a. The proxy host can be specified the same way as the proxy environment variables, including the protocol prefix (http://) and the embedded user + password. When a proxy is used, the active FTP mode as set with -P, --ftp-port, cannot be used. If -x, --proxy is provided several times, the last set value is used. Example: curl --proxy http://proxy.example https://example.com See also --socks5 and --proxy-basic. --proxy-anyauth Tells curl to pick a suitable authentication method when communicating with the given HTTP proxy. This might cause an extra request/response round-trip. Providing --proxy-anyauth multiple times has no extra effect. Example: curl --proxy-anyauth --proxy-user user:passwd -x proxy https://example.com See also -x, --proxy, --proxy-basic and --proxy-digest. --proxy-basic Tells curl to use HTTP Basic authentication when communicating with the given proxy. Use --basic for enabling HTTP Basic with a remote host. Basic is the default authentication method curl uses with proxies.
Providing --proxy-basic multiple times has no extra effect. Example: curl --proxy-basic --proxy-user user:passwd -x proxy https://example.com See also -x, --proxy, --proxy-anyauth and --proxy-digest. --proxy-ca-native (TLS) Tells curl to use the CA store from the native operating system to verify the HTTPS proxy. By default, curl uses a CA store provided in a single file or directory, but when using this option it interfaces the operating system's own vault. This option only works for curl on Windows when built to use OpenSSL. When curl on Windows is built to use Schannel, this feature is implied and curl then only uses the native CA store. curl built with wolfSSL also supports this option (added in 8.3.0). Providing --proxy-ca-native multiple times has no extra effect. Disable it again with --no-proxy-ca-native. Example: curl --proxy-ca-native -x https://proxy https://example.com See also --cacert, --capath and -k, --insecure. Added in 8.2.0. --proxy-cacert <file> Same as --cacert but used in HTTPS proxy context. If --proxy-cacert is provided several times, the last set value is used. Example: curl --proxy-cacert CA-file.txt -x https://proxy https://example.com See also --proxy-capath, --cacert, --capath and -x, --proxy. Added in 7.52.0. --proxy-capath <dir> Same as --capath but used in HTTPS proxy context. If --proxy-capath is provided several times, the last set value is used. Example: curl --proxy-capath /local/directory -x https://proxy https://example.com See also --proxy-cacert, -x, --proxy and --capath. Added in 7.52.0. --proxy-cert <cert[:passwd]> Same as -E, --cert but used in HTTPS proxy context. If --proxy-cert is provided several times, the last set value is used. Example: curl --proxy-cert file -x https://proxy https://example.com See also --proxy-cert-type. Added in 7.52.0. --proxy-cert-type <type> Same as --cert-type but used in HTTPS proxy context. If --proxy-cert-type is provided several times, the last set value is used.
Example: curl --proxy-cert-type PEM --proxy-cert file -x https://proxy https://example.com See also --proxy-cert. Added in 7.52.0. --proxy-ciphers <list> Same as --ciphers but used in HTTPS proxy context. Specifies which ciphers to use in the connection to the HTTPS proxy. The list of ciphers must specify valid ciphers. Read up on SSL cipher list details on this URL: https://curl.se/docs/ssl-ciphers.html If --proxy-ciphers is provided several times, the last set value is used. Example: curl --proxy-ciphers ECDHE-ECDSA-AES256-CCM8 -x https://proxy https://example.com See also --ciphers, --curves and -x, --proxy. Added in 7.52.0. --proxy-crlfile <file> Same as --crlfile but used in HTTPS proxy context. If --proxy-crlfile is provided several times, the last set value is used. Example: curl --proxy-crlfile rejects.txt -x https://proxy https://example.com See also --crlfile and -x, --proxy. Added in 7.52.0. --proxy-digest Tells curl to use HTTP Digest authentication when communicating with the given proxy. Use --digest for enabling HTTP Digest with a remote host. Providing --proxy-digest multiple times has no extra effect. Example: curl --proxy-digest --proxy-user user:passwd -x proxy https://example.com See also -x, --proxy, --proxy-anyauth and --proxy-basic. --proxy-header <header/@file> (HTTP) Extra header to include in the request when sending HTTP to a proxy. You may specify any number of extra headers. This is the equivalent option to -H, --header but is for proxy communication only, like in CONNECT requests when you want a separate header sent to the proxy as opposed to what is sent to the actual remote host. curl makes sure that each header you add/replace is sent with the proper end-of-line marker, you should thus not add that as a part of the header content: do not add newlines or carriage returns, they only mess things up for you. Headers specified with this option are not included in requests that curl knows will not be sent to a proxy.
This option can take an argument in @filename style, which then adds a header for each line in the input file (added in 7.55.0). Using @- makes curl read the headers from stdin. This option can be used multiple times to add/replace/remove multiple headers. --proxy-header can be used several times in a command line Examples: curl --proxy-header "X-First-Name: Joe" -x http://proxy https://example.com curl --proxy-header "User-Agent: surprise" -x http://proxy https://example.com curl --proxy-header "Host:" -x http://proxy https://example.com See also -x, --proxy. --proxy-http2 (HTTP) Tells curl to try to negotiate HTTP version 2 with an HTTPS proxy. The proxy might still only offer HTTP/1 and then curl sticks to using that version. This has no effect for any other kinds of proxies. Providing --proxy-http2 multiple times has no extra effect. Disable it again with --no-proxy-http2. Example: curl --proxy-http2 -x proxy https://example.com See also -x, --proxy. --proxy-http2 requires that the underlying libcurl was built to support HTTP/2. Added in 8.1.0. --proxy-insecure Same as -k, --insecure but used in HTTPS proxy context. Providing --proxy-insecure multiple times has no extra effect. Disable it again with --no-proxy-insecure. Example: curl --proxy-insecure -x https://proxy https://example.com See also -x, --proxy and -k, --insecure. Added in 7.52.0. --proxy-key <key> Same as --key but used in HTTPS proxy context. If --proxy-key is provided several times, the last set value is used. Example: curl --proxy-key here -x https://proxy https://example.com See also --proxy-key-type and -x, --proxy. Added in 7.52.0. --proxy-key-type <type> Same as --key-type but used in HTTPS proxy context. If --proxy-key-type is provided several times, the last set value is used. Example: curl --proxy-key-type DER --proxy-key here -x https://proxy https://example.com See also --proxy-key and -x, --proxy. Added in 7.52.0.
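The --proxy-cacert, --proxy-cert and --proxy-key options above are typically combined when an HTTPS proxy requires TLS client authentication. A minimal sketch using a throwaway self-signed certificate generated with openssl; the proxy host proxy.example and all file paths are illustrative assumptions, not real endpoints:

```shell
# Generate a throwaway client certificate + key for illustration only.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/client.key -out /tmp/client.crt \
  -subj "/CN=demo-client" 2>/dev/null

# Hypothetical invocation: authenticate to an HTTPS proxy with the client
# cert/key, verifying the proxy against a local CA bundle (paths assumed):
# curl --proxy-cacert /tmp/proxy-ca.crt \
#      --proxy-cert /tmp/client.crt --proxy-key /tmp/client.key \
#      -x https://proxy.example https://example.com
```

Note that these options only affect the TLS connection to the proxy itself; use the plain --cacert/--cert/--key family for the remote host.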
--proxy-negotiate Tells curl to use HTTP Negotiate (SPNEGO) authentication when communicating with the given proxy. Use --negotiate for enabling HTTP Negotiate (SPNEGO) with a remote host. Providing --proxy-negotiate multiple times has no extra effect. Example: curl --proxy-negotiate --proxy-user user:passwd -x proxy https://example.com See also --proxy-anyauth and --proxy-basic. --proxy-ntlm Tells curl to use HTTP NTLM authentication when communicating with the given proxy. Use --ntlm for enabling NTLM with a remote host. Providing --proxy-ntlm multiple times has no extra effect. Example: curl --proxy-ntlm --proxy-user user:passwd -x http://proxy https://example.com See also --proxy-negotiate and --proxy-anyauth. --proxy-pass <phrase> Same as --pass but used in HTTPS proxy context. If --proxy-pass is provided several times, the last set value is used. Example: curl --proxy-pass secret --proxy-key here -x https://proxy https://example.com See also -x, --proxy and --proxy-key. Added in 7.52.0. --proxy-pinnedpubkey <hashes> (TLS) Tells curl to use the specified public key file (or hashes) to verify the proxy. This can be a path to a file which contains a single public key in PEM or DER format, or any number of base64 encoded sha256 hashes preceded by 'sha256//' and separated by ';'. When negotiating a TLS or SSL connection, the server sends a certificate indicating its identity. A public key is extracted from this certificate and if it does not exactly match the public key provided to this option, curl aborts the connection before sending or receiving any data. If --proxy-pinnedpubkey is provided several times, the last set value is used. Examples: curl --proxy-pinnedpubkey keyfile https://example.com curl --proxy-pinnedpubkey 'sha256//ce118b51897f4452dc' https://example.com See also --pinnedpubkey and -x, --proxy. Added in 7.59.0. --proxy-service-name <name> This option allows you to change the service name for proxy negotiation. 
If --proxy-service-name is provided several times, the last set value is used. Example: curl --proxy-service-name "shrubbery" -x proxy https://example.com See also --service-name and -x, --proxy. --proxy-ssl-allow-beast Same as --ssl-allow-beast but used in HTTPS proxy context. Providing --proxy-ssl-allow-beast multiple times has no extra effect. Disable it again with --no-proxy-ssl-allow-beast. Example: curl --proxy-ssl-allow-beast -x https://proxy https://example.com See also --ssl-allow-beast and -x, --proxy. Added in 7.52.0. --proxy-ssl-auto-client-cert Same as --ssl-auto-client-cert but used in HTTPS proxy context. Providing --proxy-ssl-auto-client-cert multiple times has no extra effect. Disable it again with --no-proxy-ssl-auto-client-cert. Example: curl --proxy-ssl-auto-client-cert -x https://proxy https://example.com See also --ssl-auto-client-cert and -x, --proxy. Added in 7.77.0. --proxy-tls13-ciphers <ciphersuite list> (TLS) Specifies which cipher suites to use in the connection to your HTTPS proxy when it negotiates TLS 1.3. The list of cipher suites must specify valid ciphers. Read up on TLS 1.3 cipher suite details on this URL: https://curl.se/docs/ssl-ciphers.html This option is currently used only when curl is built to use OpenSSL 1.1.1 or later. If you are using a different SSL backend you can try setting TLS 1.3 cipher suites by using the --proxy-ciphers option. If --proxy-tls13-ciphers is provided several times, the last set value is used. Example: curl --proxy-tls13-ciphers TLS_AES_128_GCM_SHA256 -x proxy https://example.com See also --tls13-ciphers, --curves and --proxy-ciphers. Added in 7.61.0. --proxy-tlsauthtype <type> Same as --tlsauthtype but used in HTTPS proxy context. If --proxy-tlsauthtype is provided several times, the last set value is used. Example: curl --proxy-tlsauthtype SRP -x https://proxy https://example.com See also -x, --proxy and --proxy-tlsuser. Added in 7.52.0.
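The 'sha256//' values accepted by --proxy-pinnedpubkey (and --pinnedpubkey) are the base64 encoding of the SHA-256 digest of the DER-encoded public key. A sketch of computing such a hash with openssl, using a throwaway locally generated key as a stand-in; in practice you would extract the public key from the proxy's actual certificate:

```shell
# Throwaway RSA key standing in for the proxy's key (assumption: real use
# would start from the proxy's certificate, not a freshly generated key).
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 \
  -out /tmp/proxy-demo.key 2>/dev/null

# base64(sha256(DER-encoded public key)) is the format curl expects.
HASH=$(openssl pkey -in /tmp/proxy-demo.key -pubout -outform der 2>/dev/null \
  | openssl dgst -sha256 -binary | openssl enc -base64)
echo "sha256//$HASH"

# Hypothetical pinned-proxy invocation (proxy.example is illustrative):
# curl --proxy-pinnedpubkey "sha256//$HASH" -x https://proxy.example https://example.com
```

Because the digest is of 32 raw bytes, the base64 value is always 44 characters long, which is a quick sanity check on a computed pin.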
--proxy-tlspassword <string> Same as --tlspassword but used in HTTPS proxy context. If --proxy-tlspassword is provided several times, the last set value is used. Example: curl --proxy-tlspassword passwd -x https://proxy https://example.com See also -x, --proxy and --proxy-tlsuser. Added in 7.52.0. --proxy-tlsuser <name> Same as --tlsuser but used in HTTPS proxy context. If --proxy-tlsuser is provided several times, the last set value is used. Example: curl --proxy-tlsuser smith -x https://proxy https://example.com See also -x, --proxy and --proxy-tlspassword. Added in 7.52.0. --proxy-tlsv1 Same as -1, --tlsv1 but used in HTTPS proxy context. Providing --proxy-tlsv1 multiple times has no extra effect. Example: curl --proxy-tlsv1 -x https://proxy https://example.com See also -x, --proxy. Added in 7.52.0. -U, --proxy-user <user:password> Specify the user name and password to use for proxy authentication. If you use a Windows SSPI-enabled curl binary and do either Negotiate or NTLM authentication then you can tell curl to select the user name and password from your environment by specifying a single colon with this option: "-U :". On systems where it works, curl hides the given option argument from process listings. This is not enough to protect credentials from possibly getting seen by other users on the same system as they are still visible for a moment before being cleared. Such sensitive data should be retrieved from a file instead or similar and never used in clear text in a command line. If -U, --proxy-user is provided several times, the last set value is used. Example: curl --proxy-user name:pwd -x proxy https://example.com See also --proxy-pass. --proxy1.0 <host[:port]> Use the specified HTTP 1.0 proxy. If the port number is not specified, it is assumed to be 1080. The only difference between this and the HTTP proxy option -x, --proxy, is that attempts to use CONNECT through the proxy specify HTTP 1.0 instead of the default HTTP 1.1.
Providing --proxy1.0 multiple times has no extra effect. Example: curl --proxy1.0 -x http://proxy https://example.com See also -x, --proxy, --socks5 and --preproxy. -p, --proxytunnel When an HTTP proxy is used (-x, --proxy), this option makes curl tunnel the traffic through the proxy. The tunnel approach is made with the HTTP proxy CONNECT request and requires that the proxy allows direct connect to the remote port number curl wants to tunnel through to. To suppress proxy CONNECT response headers when curl is set to output headers use --suppress-connect-headers. Providing -p, --proxytunnel multiple times has no extra effect. Disable it again with --no-proxytunnel. Example: curl --proxytunnel -x http://proxy https://example.com See also -x, --proxy. --pubkey <key> (SFTP SCP) Public key file name. Allows you to provide your public key in this separate file. curl attempts to automatically extract the public key from the private key file, so passing this option is generally not required. Note that this public key extraction requires libcurl to be linked against a copy of libssh2 1.2.8 or higher that is itself linked against OpenSSL. If --pubkey is provided several times, the last set value is used. Example: curl --pubkey file.pub sftp://example.com/ See also --pass. -Q, --quote <command> (FTP SFTP) Send an arbitrary command to the remote FTP or SFTP server. Quote commands are sent BEFORE the transfer takes place (just after the initial PWD command in an FTP transfer, to be exact). To make commands take place after a successful transfer, prefix them with a dash '-'. (FTP only) To make commands be sent after curl has changed the working directory, just before the file transfer command(s), prefix the command with a '+'. This is not performed when a directory listing is performed. You may specify any number of commands. By default curl stops at first failure. To make curl continue even if the command fails, prefix the command with an asterisk (*).
Otherwise, if the server returns failure for one of the commands, the entire operation is aborted. You must send syntactically correct FTP commands as RFC 959 defines to FTP servers, or one of the commands listed below to SFTP servers. SFTP is a binary protocol. Unlike for FTP, curl interprets SFTP quote commands itself before sending them to the server. File names may be quoted shell-style to embed spaces or special characters. Following is the list of all supported SFTP quote commands: atime date file The atime command sets the last access time of the file named by the file operand. The <date expression> can be all sorts of date strings, see the curl_getdate(3) man page for date expression details. (Added in 7.73.0) chgrp group file The chgrp command sets the group ID of the file named by the file operand to the group ID specified by the group operand. The group operand is a decimal integer group ID. chmod mode file The chmod command modifies the file mode bits of the specified file. The mode operand is an octal integer mode number. chown user file The chown command sets the owner of the file named by the file operand to the user ID specified by the user operand. The user operand is a decimal integer user ID. ln source_file target_file The ln and symlink commands create a symbolic link at the target_file location pointing to the source_file location. mkdir directory_name The mkdir command creates the directory named by the directory_name operand. mtime date file The mtime command sets the last modification time of the file named by the file operand. The <date expression> can be all sorts of date strings, see the curl_getdate(3) man page for date expression details. (Added in 7.73.0) pwd The pwd command returns the absolute path name of the current working directory. rename source target The rename command renames the file or directory named by the source operand to the destination path named by the target operand. 
rm file The rm command removes the file specified by the file operand. rmdir directory The rmdir command removes the directory entry specified by the directory operand, provided it is empty. symlink source_file target_file See ln. -Q, --quote can be used several times in a command line Example: curl --quote "DELE file" ftp://example.com/foo See also -X, --request. --random-file <file> Deprecated option. This option is ignored (added in 7.84.0). Prior to that it only had an effect on curl if built to use old versions of OpenSSL. Specify the path name to file containing random data. The data may be used to seed the random engine for SSL connections. If --random-file is provided several times, the last set value is used. Example: curl --random-file rubbish https://example.com See also --egd-file. -r, --range <range> (HTTP FTP SFTP FILE) Retrieve a byte range (i.e. a partial document) from an HTTP/1.1, FTP or SFTP server or a local FILE. Ranges can be specified in a number of ways. 0-499 specifies the first 500 bytes 500-999 specifies the second 500 bytes -500 specifies the last 500 bytes 9500- specifies the bytes from offset 9500 and forward 0-0,-1 specifies the first and last byte only (*) (HTTP) 100-199,500-599 specifies two separate 100-byte ranges (*) (HTTP) (*) = NOTE that this causes the server to reply with a multipart response, which is returned as-is by curl! Parsing or otherwise transforming this response is the responsibility of the caller. Only digit characters (0-9) are valid in the 'start' and 'stop' fields of the 'start-stop' range syntax. If a non-digit character is given in the range, the server's response is unspecified, depending on the server's configuration. Many HTTP/1.1 servers do not have this feature enabled, so that when you attempt to get a range, curl instead gets the whole document. FTP and SFTP range downloads only support the simple 'start-stop' syntax (optionally with one of the numbers omitted).
FTP use depends on the extended FTP command SIZE. If -r, --range is provided several times, the last set value is used. Example: curl --range 22-44 https://example.com See also -C, --continue-at and -a, --append. --rate <max request rate> Specify the maximum transfer frequency you allow curl to use - in number of transfer starts per time unit (sometimes called request rate). Without this option, curl starts the next transfer as fast as possible. If given several URLs and a transfer completes faster than the allowed rate, curl waits until the next transfer is started to maintain the requested rate. This option has no effect when -Z, --parallel is used. The request rate is provided as "N/U" where N is an integer number and U is a time unit. Supported units are 's' (second), 'm' (minute), 'h' (hour) and 'd' (day, as in a 24 hour unit). The default time unit, if no "/U" is provided, is number of transfers per hour. If curl is told to allow 10 requests per minute, it does not start the next request until 6 seconds have elapsed since the previous transfer was started. This function uses millisecond resolution. If the allowed frequency is set to more than 1000 per second, it instead runs unrestricted. When retrying transfers, enabled with --retry, the separate retry delay logic is used and not this setting. This option is global and does not need to be specified for each use of --next. If --rate is provided several times, the last set value is used. Examples: curl --rate 2/s https://example.com ... curl --rate 3/h https://example.com ... curl --rate 14/m https://example.com ... See also --limit-rate and --retry-delay. Added in 7.84.0. --raw (HTTP) When used, it disables all internal HTTP decoding of content or transfer encodings and instead passes them on unaltered, raw. Providing --raw multiple times has no extra effect. Disable it again with --no-raw. Example: curl --raw https://example.com See also --tr-encoding.
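The 'start-stop' range syntax described above can be exercised offline against a local file, since -r, --range also applies to FILE URLs. A small sketch (the /tmp paths are illustrative) that fetches a document in two byte ranges and reassembles it:

```shell
# Create a local source file; file:// lets us demonstrate ranges offline.
printf 'Hello, range world!' > /tmp/range_src

# Fetch two byte ranges: 0-6 is the first 7 bytes, 7- is everything after.
curl -s -r 0-6 -o /tmp/range_part1 "file:///tmp/range_src"
curl -s -r 7-  -o /tmp/range_part2 "file:///tmp/range_src"

# Concatenating the parts reproduces the original document.
cat /tmp/range_part1 /tmp/range_part2 > /tmp/range_out
```

Against a real HTTP/1.1 server the same flags issue Range: requests, with the caveats noted above about multipart replies and servers that ignore ranges.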
-e, --referer <URL> (HTTP) Sends the "Referrer Page" information to the HTTP server. This can also be set with the -H, --header flag of course. When used with -L, --location you can append ";auto" to the -e, --referer URL to make curl automatically set the previous URL when it follows a Location: header. The ";auto" string can be used alone, even if you do not set an initial -e, --referer. If -e, --referer is provided several times, the last set value is used. Examples: curl --referer "https://fake.example" https://example.com curl --referer "https://fake.example;auto" -L https://example.com curl --referer ";auto" -L https://example.com See also -A, --user-agent and -H, --header. -J, --remote-header-name (HTTP) This option tells the -O, --remote-name option to use the server-specified Content-Disposition filename instead of extracting a filename from the URL. If the server-provided file name contains a path, that is stripped off before the file name is used. The file is saved in the current directory, or in the directory specified with --output-dir. If the server specifies a file name and a file with that name already exists in the destination directory, it is not overwritten and an error occurs - unless you allow it by using the --clobber option. If the server does not specify a file name then this option has no effect. There is no attempt to decode %-sequences (yet) in the provided file name, so this option may provide you with rather unexpected file names. This feature uses the name from the "filename" field, it does not yet support the "filename*" field (filenames with explicit character sets). WARNING: Exercise judicious use of this option, especially on Windows. A rogue server could send you the name of a DLL or other file that could be loaded automatically by Windows or some third party software. Providing -J, --remote-header-name multiple times has no extra effect. Disable it again with --no-remote-header-name. 
Example: curl -OJ https://example.com/file See also -O, --remote-name. -O, --remote-name Write output to a local file named like the remote file we get. (Only the file part of the remote file is used, the path is cut off.) The file is saved in the current working directory. If you want the file saved in a different directory, make sure you change the current working directory before invoking curl with this option or use --output-dir. The remote file name to use for saving is extracted from the given URL, nothing else, and if it already exists it is overwritten. If you want the server to be able to choose the file name refer to -J, --remote-header-name which can be used in addition to this option. If the server chooses a file name and that name already exists it is not overwritten. There is no URL decoding done on the file name. If it has %20 or other URL encoded parts of the name, they end up as-is as file name. You may use this option as many times as the number of URLs you have. -O, --remote-name can be used several times in a command line Example: curl -O https://example.com/filename See also --remote-name-all, --output-dir and -J, --remote-header-name. --remote-name-all This option changes the default action for all given URLs to be dealt with as if -O, --remote-name were used for each one. So if you want to disable that for a specific URL after --remote-name-all has been used, you must use "-o -" or --no-remote-name. Providing --remote-name-all multiple times has no extra effect. Disable it again with --no-remote-name-all. Example: curl --remote-name-all ftp://example.com/file1 ftp://example.com/file2 See also -O, --remote-name. -R, --remote-time Makes curl attempt to figure out the timestamp of the remote file that is getting downloaded, and if that is available make the local file get that same timestamp. Providing -R, --remote-time multiple times has no extra effect. Disable it again with --no-remote-time. 
Example: curl --remote-time -o foo https://example.com See also -O, --remote-name and -z, --time-cond. --remove-on-error When curl returns an error when told to save output in a local file, this option removes that saved file before exiting. This prevents curl from leaving a partial file in the case of an error during transfer. If the output is not a file, this option has no effect. Providing --remove-on-error multiple times has no extra effect. Disable it again with --no-remove-on-error. Example: curl --remove-on-error -o output https://example.com See also -f, --fail. Added in 7.83.0. -X, --request <method> Change the method to use when starting the transfer. curl passes on the verbatim string you give it in the request without any filter or other safeguards. That includes white space and control characters. HTTP Specifies a custom request method to use when communicating with the HTTP server. The specified request method is used instead of the method otherwise used (which defaults to GET). Read the HTTP 1.1 specification for details and explanations. Common additional HTTP requests include PUT and DELETE, but related technologies like WebDAV offer PROPFIND, COPY, MOVE and more. Normally you do not need this option. All sorts of GET, HEAD, POST and PUT requests are rather invoked by using dedicated command line options. This option only changes the actual word used in the HTTP request, it does not alter the way curl behaves. So for example if you want to make a proper HEAD request, using -X HEAD does not suffice. You need to use the -I, --head option. The method string you set with -X, --request is used for all requests, which if you for example use -L, --location may cause unintended side-effects when curl does not change request method according to the HTTP 30x response codes - and similar. FTP Specifies a custom FTP command to use instead of LIST when doing file lists with FTP. POP3 Specifies a custom POP3 command to use instead of LIST or RETR.
IMAP Specifies a custom IMAP command to use instead of LIST. SMTP Specifies a custom SMTP command to use instead of HELP or VRFY. If -X, --request is provided several times, the last set value is used. Examples: curl -X "DELETE" https://example.com curl -X NLST ftp://example.com/ See also --request-target. --request-target <path> (HTTP) Tells curl to use an alternative "target" (path) instead of using the path as provided in the URL. Particularly useful when wanting to issue HTTP requests without leading slash or other data that does not follow the regular URL pattern, like "OPTIONS *". curl passes on the verbatim string you give it in the request without any filter or other safeguards. That includes white space and control characters. If --request-target is provided several times, the last set value is used. Example: curl --request-target "*" -X OPTIONS https://example.com See also -X, --request. Added in 7.55.0. --resolve <[+]host:port:addr[,addr]...> Provide a custom address for a specific host and port pair. Using this, you can make the curl request(s) use a specified address and prevent the otherwise normally resolved address from being used. Consider it a sort of /etc/hosts alternative provided on the command line. The port number should be the number used for the specific protocol the host is used for. It means you need several entries if you want to provide addresses for the same host but different ports. By specifying '*' as host you can tell curl to resolve any host and specific port pair to the specified address. The wildcard is resolved last, so any --resolve with a specific host and port is used first. The provided address set by this option is used even if -4, --ipv4 or -6, --ipv6 is set to make curl use another IP version. By prefixing the host with a '+' you can make the entry time out after curl's default timeout (1 minute). Note that this only makes sense for long running parallel transfers with a lot of files.
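The <[+]host:port:addr> entry syntax of --resolve can be illustrated by splitting one entry with shell parameter expansion. The values are invented, and this naive colon split would need extra care for bracketed IPv6 addresses:

```shell
# Pull apart a --resolve entry of the form [+]host:port:addr.
entry='+example.com:443:127.0.0.1'   # invented entry

case "$entry" in
  +*) timed=yes ;;                   # '+' marks an entry that times out
  *)  timed=no ;;
esac
spec="${entry#+}"                    # drop the optional '+'
host="${spec%%:*}"                   # before the first colon
rest="${spec#*:}"
port="${rest%%:*}"                   # between first and second colon
addr="${rest#*:}"                    # may be a comma-separated list
# Note: a bracketed IPv6 address like [::1] would defeat this split.

echo "$host port $port -> $addr (timed: $timed)"
```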
In such cases, if this option is used curl tries to resolve the host as it normally would once the timeout has expired. Support for providing the IP address within [brackets] was added in 7.57.0. Support for providing multiple IP addresses per entry was added in 7.59.0. Support for resolving with wildcard was added in 7.64.0. Support for the '+' prefix was added in 7.75.0. --resolve can be used several times in a command line Example: curl --resolve example.com:443:127.0.0.1 https://example.com See also --connect-to and --alt-svc. --retry <num> If a transient error is returned when curl tries to perform a transfer, it retries this number of times before giving up. Setting the number to 0 makes curl do no retries (which is the default). Transient error means either: a timeout, an FTP 4xx response code or an HTTP 408, 429, 500, 502, 503 or 504 response code. When curl is about to retry a transfer, it first waits one second and then for all forthcoming retries it doubles the waiting time until it reaches 10 minutes, which then remains the delay between the rest of the retries. By using --retry-delay you disable this exponential backoff algorithm. See also --retry-max-time to limit the total time allowed for retries. curl complies with the Retry-After: response header if one was present to know when to issue the next retry (added in 7.66.0). If --retry is provided several times, the last set value is used. Example: curl --retry 7 https://example.com See also --retry-max-time. --retry-all-errors Retry on any error. This option is used together with --retry. This option is the "sledgehammer" of retrying. Do not use this option by default (for example in your curlrc), there may be unintended consequences such as sending or receiving duplicate data. Do not use with redirected input or output. You'd be much better off handling your unique problems in a shell script. Please read the example below.
WARNING: For server compatibility curl attempts to retry failed flaky transfers as close as possible to how they were started, but this is not possible with redirected input or output. For example, before retrying it removes output data from a failed partial transfer that was written to an output file. However this is not true of data redirected to a | pipe or > file, which are not reset. We strongly suggest you do not parse or record output via redirect in combination with this option, since you may receive duplicate data. By default curl does not return an error for transfers with an HTTP response code that indicates an HTTP error, if the transfer was successful. For example, if a server replies 404 Not Found and the reply is fully received then that is not an error. When --retry is used then curl retries on some HTTP response codes that indicate transient HTTP errors, but that does not include most 4xx response codes such as 404. If you want to retry on all response codes that indicate HTTP errors (4xx and 5xx) then combine with -f, --fail. Providing --retry-all-errors multiple times has no extra effect. Disable it again with --no-retry-all-errors. Example: curl --retry 5 --retry-all-errors https://example.com See also --retry. Added in 7.71.0. --retry-connrefused In addition to the other conditions, consider ECONNREFUSED as a transient error too for --retry. This option is used together with --retry. Providing --retry-connrefused multiple times has no extra effect. Disable it again with --no-retry-connrefused. Example: curl --retry-connrefused --retry 7 https://example.com See also --retry and --retry-all-errors. Added in 7.52.0. --retry-delay <seconds> Make curl sleep this amount of time before each retry when a transfer has failed with a transient error (it changes the default backoff time algorithm between retries). This option is only interesting if --retry is also used. Setting this delay to zero makes curl use the default backoff time.
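The default backoff that --retry-delay replaces - wait one second, double each time, cap at 10 minutes - can be tabulated with a few lines of shell arithmetic. This is a sketch of the documented schedule only, not curl's timer code:

```shell
# Wait time before each of the first 12 retries under the default
# backoff: 1s, doubling, capped at 600s (10 minutes).
delay=1
schedule=''
for retry in 1 2 3 4 5 6 7 8 9 10 11 12; do
  schedule="$schedule $delay"
  delay=$((delay * 2))
  if [ "$delay" -gt 600 ]; then delay=600; fi
done
echo "waits:$schedule"
```

This prints waits: 1 2 4 8 16 32 64 128 256 512 600 600 - once the cap is reached, every further retry waits ten minutes.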
If --retry-delay is provided several times, the last set value is used. Example: curl --retry-delay 5 --retry 7 https://example.com See also --retry. --retry-max-time <seconds> The retry timer is reset before the first transfer attempt. Retries are done as usual (see --retry) as long as the timer has not reached this given limit. Notice that if the timer has not reached the limit, the request is made and, while performing, it may take longer than this given time period. To limit a single request's maximum time, use -m, --max-time. Set this option to zero to disable the retry timeout. If --retry-max-time is provided several times, the last set value is used. Example: curl --retry-max-time 30 --retry 10 https://example.com See also --retry. --sasl-authzid <identity> Use this authorization identity (authzid), during SASL PLAIN authentication, in addition to the authentication identity (authcid) as specified by -u, --user. If the option is not specified, the server derives the authzid from the authcid, but if specified, and depending on the server implementation, it may be used to access another user's inbox that the user has been granted access to, or a shared mailbox, for example. If --sasl-authzid is provided several times, the last set value is used. Example: curl --sasl-authzid zid imap://example.com/ See also --login-options. Added in 7.66.0. --sasl-ir Enable initial response in SASL authentication. Providing --sasl-ir multiple times has no extra effect. Disable it again with --no-sasl-ir. Example: curl --sasl-ir imap://example.com/ See also --sasl-authzid. --service-name <name> This option allows you to change the service name for SPNEGO. If --service-name is provided several times, the last set value is used. Example: curl --service-name sockd/server https://example.com See also --negotiate and --proxy-service-name. -S, --show-error When used with -s, --silent, it makes curl show an error message if it fails.
This option is global and does not need to be specified for each use of --next. Providing -S, --show-error multiple times has no extra effect. Disable it again with --no-show-error. Example: curl --show-error --silent https://example.com See also --no-progress-meter. -s, --silent Silent or quiet mode. Do not show progress meter or error messages. Makes curl mute. It still outputs the data you ask for, potentially even to the terminal/stdout unless you redirect it. Use -S, --show-error in addition to this option to disable the progress meter but still show error messages. Providing -s, --silent multiple times has no extra effect. Disable it again with --no-silent. Example: curl -s https://example.com See also -v, --verbose, --stderr and --no-progress-meter. --socks4 <host[:port]> Use the specified SOCKS4 proxy. If the port number is not specified, it is assumed to be 1080. Using this socket type makes curl resolve the host name and pass the address on to the proxy. To specify a proxy on a unix domain socket, use localhost for host, e.g. socks4://localhost/path/to/socket.sock This option overrides any previous use of -x, --proxy, as they are mutually exclusive. This option is superfluous since you can specify a socks4 proxy with -x, --proxy using a socks4:// protocol prefix. --preproxy can be used to specify a SOCKS proxy at the same time -x, --proxy is used with an HTTP/HTTPS proxy (added in 7.52.0). In such a case, curl first connects to the SOCKS proxy and then connects (through SOCKS) to the HTTP or HTTPS proxy. If --socks4 is provided several times, the last set value is used. Example: curl --socks4 hostname:4096 https://example.com See also --socks4a, --socks5 and --socks5-hostname. --socks4a <host[:port]> Use the specified SOCKS4a proxy. If the port number is not specified, it is assumed to be 1080. This asks the proxy to resolve the host name. To specify a proxy on a unix domain socket, use localhost for host, e.g.
socks4a://localhost/path/to/socket.sock This option overrides any previous use of -x, --proxy, as they are mutually exclusive. This option is superfluous since you can specify a socks4a proxy with -x, --proxy using a socks4a:// protocol prefix. --preproxy can be used to specify a SOCKS proxy at the same time -x, --proxy is used with an HTTP/HTTPS proxy (added in 7.52.0). In such a case, curl first connects to the SOCKS proxy and then connects (through SOCKS) to the HTTP or HTTPS proxy. If --socks4a is provided several times, the last set value is used. Example: curl --socks4a hostname:4096 https://example.com See also --socks4, --socks5 and --socks5-hostname. --socks5 <host[:port]> Use the specified SOCKS5 proxy - but resolve the host name locally. If the port number is not specified, it is assumed to be 1080. To specify a proxy on a unix domain socket, use localhost for host, e.g. socks5://localhost/path/to/socket.sock This option overrides any previous use of -x, --proxy, as they are mutually exclusive. This option is superfluous since you can specify a socks5 proxy with -x, --proxy using a socks5:// protocol prefix. --preproxy can be used to specify a SOCKS proxy at the same time -x, --proxy is used with an HTTP/HTTPS proxy (added in 7.52.0). In such a case, curl first connects to the SOCKS proxy and then connects (through SOCKS) to the HTTP or HTTPS proxy. This option (as well as --socks4) does not work with IPv6, FTPS or LDAP. If --socks5 is provided several times, the last set value is used. Example: curl --socks5 proxy.example:7000 https://example.com See also --socks5-hostname and --socks4a. --socks5-basic Tells curl to use username/password authentication when connecting to a SOCKS5 proxy. The username/password authentication is enabled by default. Use --socks5-gssapi to force GSS-API authentication to SOCKS5 proxies. Providing --socks5-basic multiple times has no extra effect.
Example: curl --socks5-basic --socks5 hostname:4096 https://example.com See also --socks5. Added in 7.55.0. --socks5-gssapi Tells curl to use GSS-API authentication when connecting to a SOCKS5 proxy. The GSS-API authentication is enabled by default (if curl is compiled with GSS-API support). Use --socks5-basic to force username/password authentication to SOCKS5 proxies. Providing --socks5-gssapi multiple times has no extra effect. Disable it again with --no-socks5-gssapi. Example: curl --socks5-gssapi --socks5 hostname:4096 https://example.com See also --socks5. Added in 7.55.0. --socks5-gssapi-nec As part of the GSS-API negotiation a protection mode is negotiated. RFC 1961 says in section 4.3/4.4 it should be protected, but the NEC reference implementation does not. The option --socks5-gssapi-nec allows the unprotected exchange of the protection mode negotiation. Providing --socks5-gssapi-nec multiple times has no extra effect. Disable it again with --no-socks5-gssapi-nec. Example: curl --socks5-gssapi-nec --socks5 hostname:4096 https://example.com See also --socks5. --socks5-gssapi-service <name> The default service name for a SOCKS server is rcmd/server-fqdn. This option allows you to change it. If --socks5-gssapi-service is provided several times, the last set value is used. Example: curl --socks5-gssapi-service sockd --socks5 hostname:4096 https://example.com See also --socks5. --socks5-hostname <host[:port]> Use the specified SOCKS5 proxy (and let the proxy resolve the host name). If the port number is not specified, it is assumed to be 1080. To specify a proxy on a unix domain socket, use localhost for host, e.g. socks5h://localhost/path/to/socket.sock This option overrides any previous use of -x, --proxy, as they are mutually exclusive. This option is superfluous since you can specify a socks5 hostname proxy with -x, --proxy using a socks5h:// protocol prefix.
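Each SOCKS option above is documented as shorthand for a -x, --proxy scheme prefix; that mapping can be written down as a small lookup. The proxy host below is a placeholder:

```shell
# Map a SOCKS option to its equivalent -x/--proxy scheme prefix.
opt='--socks5-hostname'
case "$opt" in
  --socks4)          scheme=socks4  ;;  # local resolve, SOCKS4
  --socks4a)         scheme=socks4a ;;  # proxy resolves the name
  --socks5)          scheme=socks5  ;;  # local resolve, SOCKS5
  --socks5-hostname) scheme=socks5h ;;  # proxy resolves the name
esac
echo "equivalent: -x $scheme://proxy.example:1080"
```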
--preproxy can be used to specify a SOCKS proxy at the same time -x, --proxy is used with an HTTP/HTTPS proxy (added in 7.52.0). In such a case, curl first connects to the SOCKS proxy and then connects (through SOCKS) to the HTTP or HTTPS proxy. If --socks5-hostname is provided several times, the last set value is used. Example: curl --socks5-hostname proxy.example:7000 https://example.com See also --socks5 and --socks4a. -Y, --speed-limit <speed> If a transfer is slower than this given speed (in bytes per second) for speed-time seconds it gets aborted. speed-time is set with -y, --speed-time and is 30 if not set. If -Y, --speed-limit is provided several times, the last set value is used. Example: curl --speed-limit 300 --speed-time 10 https://example.com See also -y, --speed-time, --limit-rate and -m, --max-time. -y, --speed-time <seconds> If a transfer runs slower than speed-limit bytes per second during a speed-time period, the transfer is aborted. If speed-time is used, the default speed-limit is 1 unless set with -Y, --speed-limit. This option controls transfers (in both directions) but does not affect slow connects etc. If this is a concern for you, try the --connect-timeout option. If -y, --speed-time is provided several times, the last set value is used. Example: curl --speed-limit 300 --speed-time 10 https://example.com See also -Y, --speed-limit and --limit-rate. --ssl (FTP IMAP POP3 SMTP LDAP) Warning: this is considered an insecure option. Consider using --ssl-reqd instead to be sure curl upgrades to a secure connection. Try to use SSL/TLS for the connection. Reverts to a non-secure connection if the server does not support SSL/TLS. See also --ftp-ssl-control and --ssl-reqd for different levels of encryption required. This option is handled in LDAP (added in 7.81.0). It is fully supported by the OpenLDAP backend and ignored by the generic ldap backend. Please note that a server may close the connection if the negotiation does not succeed. 
This option was formerly known as --ftp-ssl. That option name can still be used but might be removed in a future version. Providing --ssl multiple times has no extra effect. Disable it again with --no-ssl. Example: curl --ssl pop3://example.com/ See also --ssl-reqd, -k, --insecure and --ciphers. --ssl-allow-beast (TLS) This option tells curl to not work around a security flaw in the SSL3 and TLS1.0 protocols known as BEAST. If this option is not used, the SSL layer may use workarounds known to cause interoperability problems with some older SSL implementations. WARNING: this option loosens the SSL security, and by using this flag you ask for exactly that. Providing --ssl-allow-beast multiple times has no extra effect. Disable it again with --no-ssl-allow-beast. Example: curl --ssl-allow-beast https://example.com See also --proxy-ssl-allow-beast and -k, --insecure. --ssl-auto-client-cert (TLS) (Schannel) Tell libcurl to automatically locate and use a client certificate for authentication, when requested by the server. Since the server can request any certificate that supports client authentication in the OS certificate store it could be a privacy violation and unexpected. Providing --ssl-auto-client-cert multiple times has no extra effect. Disable it again with --no-ssl-auto-client-cert. Example: curl --ssl-auto-client-cert https://example.com See also --proxy-ssl-auto-client-cert. Added in 7.77.0. --ssl-no-revoke (TLS) (Schannel) This option tells curl to disable certificate revocation checks. WARNING: this option loosens the SSL security, and by using this flag you ask for exactly that. Providing --ssl-no-revoke multiple times has no extra effect. Disable it again with --no-ssl-no-revoke. Example: curl --ssl-no-revoke https://example.com See also --crlfile. --ssl-reqd (FTP IMAP POP3 SMTP LDAP) Require SSL/TLS for the connection. Terminates the connection if the transfer cannot be upgraded to use SSL/TLS. This option is handled in LDAP (added in 7.81.0). 
It is fully supported by the OpenLDAP backend and rejected by the generic ldap backend if explicit TLS is required. This option is unnecessary if you use a URL scheme that in itself implies immediate and implicit use of TLS, like for FTPS, IMAPS, POP3S, SMTPS and LDAPS. Such a transfer always fails if the TLS handshake does not work. This option was formerly known as --ftp-ssl-reqd. Providing --ssl-reqd multiple times has no extra effect. Disable it again with --no-ssl-reqd. Example: curl --ssl-reqd ftp://example.com See also --ssl and -k, --insecure. --ssl-revoke-best-effort (TLS) (Schannel) This option tells curl to ignore certificate revocation checks when they failed due to missing/offline distribution points for the revocation check lists. Providing --ssl-revoke-best-effort multiple times has no extra effect. Disable it again with --no-ssl-revoke-best-effort. Example: curl --ssl-revoke-best-effort https://example.com See also --crlfile and -k, --insecure. Added in 7.70.0. -2, --sslv2 (SSL) This option previously asked curl to use SSLv2, but is now ignored (added in 7.77.0). SSLv2 is widely considered insecure (see RFC 6176). Providing -2, --sslv2 multiple times has no extra effect. Example: curl --sslv2 https://example.com See also --http1.1 and --http2. -2, --sslv2 requires that the underlying libcurl was built to support TLS. This option is mutually exclusive to -3, --sslv3 and -1, --tlsv1 and --tlsv1.1 and --tlsv1.2. -3, --sslv3 (SSL) This option previously asked curl to use SSLv3, but is now ignored (added in 7.77.0). SSLv3 is widely considered insecure (see RFC 7568). Providing -3, --sslv3 multiple times has no extra effect. Example: curl --sslv3 https://example.com See also --http1.1 and --http2. -3, --sslv3 requires that the underlying libcurl was built to support TLS. This option is mutually exclusive to -2, --sslv2 and -1, --tlsv1 and --tlsv1.1 and --tlsv1.2. --stderr <file> Redirect all writes to stderr to the specified file instead. 
If the file name is a plain '-', it is instead written to stdout. This option is global and does not need to be specified for each use of --next. If --stderr is provided several times, the last set value is used. Example: curl --stderr output.txt https://example.com See also -v, --verbose and -s, --silent. --styled-output Enables the automatic use of bold font styles when writing HTTP headers to the terminal. Use --no-styled-output to switch them off. Styled output requires a terminal that supports bold fonts. This feature is not present on curl for Windows due to lack of this capability. This option is global and does not need to be specified for each use of --next. Providing --styled-output multiple times has no extra effect. Disable it again with --no-styled-output. Example: curl --styled-output -I https://example.com See also -I, --head and -v, --verbose. Added in 7.61.0. --suppress-connect-headers When -p, --proxytunnel is used and a CONNECT request is made do not output proxy CONNECT response headers. This option is meant to be used with -D, --dump-header or -i, --include which are used to show protocol headers in the output. It has no effect on debug options such as -v, --verbose or --trace, or any statistics. Providing --suppress-connect-headers multiple times has no extra effect. Disable it again with --no-suppress-connect-headers. Example: curl --suppress-connect-headers --include -x proxy https://example.com See also -D, --dump-header, -i, --include and -p, --proxytunnel. Added in 7.54.0. --tcp-fastopen Enable use of TCP Fast Open (RFC 7413). TCP Fast Open is a TCP extension that allows data to get sent earlier over the connection (before the final handshake ACK) if the client and server have been connected previously. Providing --tcp-fastopen multiple times has no extra effect. Disable it again with --no-tcp-fastopen. Example: curl --tcp-fastopen https://example.com See also --false-start. --tcp-nodelay Turn on the TCP_NODELAY option. 
See the curl_easy_setopt(3) man page for details about this option. curl sets this option by default and you need to explicitly switch it off if you do not want it on (added in 7.50.2). Providing --tcp-nodelay multiple times has no extra effect. Disable it again with --no-tcp-nodelay. Example: curl --tcp-nodelay https://example.com See also -N, --no-buffer. -t, --telnet-option <opt=val> Pass options to the telnet protocol. Supported options are: TTYPE=<term> Sets the terminal type. XDISPLOC=<X display> Sets the X display location. NEW_ENV=<var,val> Sets an environment variable. -t, --telnet-option can be used several times in a command line Example: curl -t TTYPE=vt100 telnet://example.com/ See also -K, --config. --tftp-blksize <value> (TFTP) Set the TFTP BLKSIZE option (must be >512). This is the block size that curl tries to use when transferring data to or from a TFTP server. By default 512 bytes are used. If --tftp-blksize is provided several times, the last set value is used. Example: curl --tftp-blksize 1024 tftp://example.com/file See also --tftp-no-options. --tftp-no-options (TFTP) Tells curl not to send TFTP options requests. This option improves interop with some legacy servers that do not acknowledge or properly implement TFTP options. When this option is used --tftp-blksize is ignored. Providing --tftp-no-options multiple times has no extra effect. Disable it again with --no-tftp-no-options. Example: curl --tftp-no-options tftp://192.168.0.1/ See also --tftp-blksize. -z, --time-cond <time> (HTTP FTP) Request a file that has been modified later than the given time and date, or one that has been modified before that time. The <date expression> can be all sorts of date strings, or, if it does not match any internal ones, it is taken as a filename and curl tries to get the modification date (mtime) from that file instead. See the curl_getdate(3) man pages for date expression details.
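The dash convention -z, --time-cond uses (a leading '-' flips the condition from "newer than" to "older than") can be sketched as a tiny parser. The date string is just an example:

```shell
# A leading '-' on the date expression means "older than";
# otherwise the condition is "newer than".
expr='-Wed 01 Sep 2021 12:18:00'   # example expression

case "$expr" in
  -*) cond=older; when="${expr#-}" ;;
  *)  cond=newer; when="$expr" ;;
esac
echo "request documents $cond than: $when"
```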
Start the date expression with a dash (-) to make it request a document that is older than the given date/time; the default is a document that is newer than the specified date/time. If provided a non-existing file, curl outputs a warning about that fact and proceeds to do the transfer without a time condition. If -z, --time-cond is provided several times, the last set value is used. Examples: curl -z "Wed 01 Sep 2021 12:18:00" https://example.com curl -z "-Wed 01 Sep 2021 12:18:00" https://example.com curl -z file https://example.com See also --etag-compare and -R, --remote-time. --tls-max <VERSION> (TLS) VERSION defines the maximum supported TLS version. The minimum acceptable version is set by tlsv1.0, tlsv1.1, tlsv1.2 or tlsv1.3. If the connection is done without TLS, this option has no effect. This includes QUIC-using (HTTP/3) transfers. default Use up to recommended TLS version. 1.0 Use up to TLSv1.0. 1.1 Use up to TLSv1.1. 1.2 Use up to TLSv1.2. 1.3 Use up to TLSv1.3. If --tls-max is provided several times, the last set value is used. Examples: curl --tls-max 1.2 https://example.com curl --tls-max 1.3 --tlsv1.2 https://example.com See also --tlsv1.0, --tlsv1.1, --tlsv1.2 and --tlsv1.3. --tls-max requires that the underlying libcurl was built to support TLS. Added in 7.54.0. --tls13-ciphers <ciphersuite list> (TLS) Specifies which cipher suites to use in the connection if it negotiates TLS 1.3. The list of cipher suites must specify valid ciphers. Read up on TLS 1.3 cipher suite details on this URL: https://curl.se/docs/ssl-ciphers.html This option is currently used only when curl is built to use OpenSSL 1.1.1 or later, or Schannel. If you are using a different SSL backend you can try setting TLS 1.3 cipher suites by using the --ciphers option. If --tls13-ciphers is provided several times, the last set value is used. Example: curl --tls13-ciphers TLS_AES_128_GCM_SHA256 https://example.com See also --ciphers, --curves and --proxy-tls13-ciphers. Added in 7.61.0.
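--tls-max sets a ceiling while the --tlsv1.x options set a floor, so whether a negotiated version is acceptable reduces to a range check. A sketch with made-up integer encodings (10 for TLS 1.0 up to 13 for TLS 1.3), not curl's internal representation:

```shell
# Encode versions as integers: 10=TLS1.0 ... 13=TLS1.3 (our own scheme).
floor=12        # as if --tlsv1.2 were given
ceiling=13      # as if --tls-max 1.3 were given
negotiated=12

if [ "$negotiated" -ge "$floor" ] && [ "$negotiated" -le "$ceiling" ]; then
  verdict=ok
else
  verdict=rejected
fi
echo "negotiated version check: $verdict"
```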
--tlsauthtype <type> (TLS) Set TLS authentication type. Currently, the only supported option is "SRP", for TLS-SRP (RFC 5054). If --tlsuser and --tlspassword are specified but --tlsauthtype is not, then this option defaults to "SRP". This option works only if the underlying libcurl is built with TLS-SRP support, which requires OpenSSL or GnuTLS with TLS-SRP support. If --tlsauthtype is provided several times, the last set value is used. Example: curl --tlsauthtype SRP https://example.com See also --tlsuser. --tlspassword <string> (TLS) Set password for use with the TLS authentication method specified with --tlsauthtype. Requires that --tlsuser also be set. This option does not work with TLS 1.3. If --tlspassword is provided several times, the last set value is used. Example: curl --tlspassword pwd --tlsuser user https://example.com See also --tlsuser. --tlsuser <name> (TLS) Set username for use with the TLS authentication method specified with --tlsauthtype. Requires that --tlspassword also be set. This option does not work with TLS 1.3. If --tlsuser is provided several times, the last set value is used. Example: curl --tlspassword pwd --tlsuser user https://example.com See also --tlspassword. -1, --tlsv1 (TLS) Tells curl to use at least TLS version 1.x when negotiating with a remote TLS server. That means TLS version 1.0 or higher. Providing -1, --tlsv1 multiple times has no extra effect. Example: curl --tlsv1 https://example.com See also --http1.1 and --http2. -1, --tlsv1 requires that the underlying libcurl was built to support TLS. This option is mutually exclusive to --tlsv1.1 and --tlsv1.2 and --tlsv1.3. --tlsv1.0 (TLS) Forces curl to use TLS version 1.0 or later when connecting to a remote TLS server. In old versions of curl this option was documented to allow _only_ TLS 1.0. That behavior was inconsistent depending on the TLS library. Use --tls-max if you want to set a maximum TLS version. Providing --tlsv1.0 multiple times has no extra effect.
Example: curl --tlsv1.0 https://example.com See also --tlsv1.3. --tlsv1.1 (TLS) Forces curl to use TLS version 1.1 or later when connecting to a remote TLS server. In old versions of curl this option was documented to allow _only_ TLS 1.1. That behavior was inconsistent depending on the TLS library. Use --tls-max if you want to set a maximum TLS version. Providing --tlsv1.1 multiple times has no extra effect. Example: curl --tlsv1.1 https://example.com See also --tlsv1.3 and --tls-max. --tlsv1.2 (TLS) Forces curl to use TLS version 1.2 or later when connecting to a remote TLS server. In old versions of curl this option was documented to allow _only_ TLS 1.2. That behavior was inconsistent depending on the TLS library. Use --tls-max if you want to set a maximum TLS version. Providing --tlsv1.2 multiple times has no extra effect. Example: curl --tlsv1.2 https://example.com See also --tlsv1.3 and --tls-max. --tlsv1.3 (TLS) Forces curl to use TLS version 1.3 or later when connecting to a remote TLS server. If the connection is done without TLS, this option has no effect. This includes QUIC-using (HTTP/3) transfers. Note that TLS 1.3 is not supported by all TLS backends. Providing --tlsv1.3 multiple times has no extra effect. Example: curl --tlsv1.3 https://example.com See also --tlsv1.2 and --tls-max. Added in 7.52.0. --tr-encoding (HTTP) Request a compressed Transfer-Encoding response using one of the algorithms curl supports, and uncompress the data while receiving it. Providing --tr-encoding multiple times has no extra effect. Disable it again with --no-tr-encoding. Example: curl --tr-encoding https://example.com See also --compressed. --trace <file> Enables a full trace dump of all incoming and outgoing data, including descriptive information, to the given output file. Use "-" as filename to have the output sent to stdout. Use "%" as filename to have the output sent to stderr. 
Note that verbose output of curl activities and network traffic might contain sensitive data, including user names, credentials or secret data content. Be aware and be careful when sharing trace logs with others. This option is global and does not need to be specified for each use of --next. If --trace is provided several times, the last set value is used. Example: curl --trace log.txt https://example.com See also --trace-ascii, --trace-config, --trace-ids and --trace-time. This option is mutually exclusive to -v, --verbose and --trace-ascii. --trace-ascii <file> Enables a full trace dump of all incoming and outgoing data, including descriptive information, to the given output file. Use "-" as filename to have the output sent to stdout. This is similar to --trace, but leaves out the hex part and only shows the ASCII part of the dump. It makes smaller output that might be easier to read for untrained humans. Note that verbose output of curl activities and network traffic might contain sensitive data, including user names, credentials or secret data content. Be aware and be careful when sharing trace logs with others. This option is global and does not need to be specified for each use of --next. If --trace-ascii is provided several times, the last set value is used. Example: curl --trace-ascii log.txt https://example.com See also -v, --verbose and --trace. This option is mutually exclusive to --trace and -v, --verbose. --trace-config <string> Set configuration for trace output. A comma-separated list of components where detailed output can be made available from. Names are case-insensitive. Specify 'all' to enable all trace components. In addition to trace component names, specify "ids" and "time" to avoid extra --trace-ids or --trace-time parameters. See the curl_global_trace(3) man page for more details. This option is global and does not need to be specified for each use of --next. 
--trace-config can be used several times in a command line Example: curl --trace-config ids,http/2 https://example.com See also -v, --verbose and --trace. This option is mutually exclusive to --trace and -v, --verbose. Added in 8.3.0. --trace-ids Prepends the transfer and connection identifiers to each trace or verbose line that curl displays. This option is global and does not need to be specified for each use of --next. Providing --trace-ids multiple times has no extra effect. Disable it again with --no-trace-ids. Example: curl --trace-ids --trace-ascii output https://example.com See also --trace and -v, --verbose. Added in 8.2.0. --trace-time Prepends a time stamp to each trace or verbose line that curl displays. This option is global and does not need to be specified for each use of --next. Providing --trace-time multiple times has no extra effect. Disable it again with --no-trace-time. Example: curl --trace-time --trace-ascii output https://example.com See also --trace and -v, --verbose. --unix-socket <path> (HTTP) Connect through this Unix domain socket, instead of using the network. If --unix-socket is provided several times, the last set value is used. Example: curl --unix-socket socket-path https://example.com See also --abstract-unix-socket. -T, --upload-file <file> This transfers the specified local file to the remote URL. If there is no file part in the specified URL, curl appends the local file name to the end of the URL before the operation starts. You must use a trailing slash (/) on the last directory to prove to curl that there is no file name or curl thinks that your last directory name is the remote file name to use. When putting the local file name at the end of the URL, curl ignores what is on the left side of any slash (/) or backslash (\) used in the file name and only appends what is on the right side of the rightmost such character. Use the file name "-" (a single dash) to use stdin instead of a given file. Alternately, the file name "." 
(a single period) may be specified instead of "-" to use stdin in non-blocking mode to allow reading server output while stdin is being uploaded. If this option is used with an HTTP(S) URL, the PUT method is used. You can specify one -T, --upload-file for each URL on the command line. Each -T, --upload-file + URL pair specifies what to upload and to where. curl also supports "globbing" of the -T, --upload-file argument, meaning that you can upload multiple files to a single URL by using the same URL globbing style supported in the URL. When uploading to an SMTP server: the uploaded data is assumed to be RFC 5322 formatted. It has to feature the necessary set of headers and mail body formatted correctly by the user as curl does not transcode nor encode it further in any way. -T, --upload-file can be used several times in a command line Examples: curl -T file https://example.com curl -T "img[1-1000].png" ftp://ftp.example.com/ curl --upload-file "{file1,file2}" https://example.com See also -G, --get, -I, --head, -X, --request and -d, --data. --url <url> Specify a URL to fetch. This option is mostly handy when you want to specify URL(s) in a config file. If the given URL is missing a scheme name (such as "http://" or "ftp://" etc) then curl makes a guess based on the host. If the outermost subdomain name matches DICT, FTP, IMAP, LDAP, POP3 or SMTP then that protocol is used, otherwise HTTP is used. Guessing can be avoided by providing a full URL including the scheme, or disabled by setting a default protocol (added in 7.45.0), see --proto-default for details. To control where this URL is written, use the -o, --output or the -O, --remote-name options. WARNING: On Windows, particular file:// accesses can be converted to network accesses by the operating system. Beware! --url can be used several times in a command line Example: curl --url https://example.com See also -:, --next and -K, --config. 
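The slash/backslash rule that -T, --upload-file applies to the local file name can be sketched with a small helper. `remote_name` here is a hypothetical illustration of the rule described above, not part of curl:

```shell
# Hypothetical helper mimicking the rule -T uses when appending the local
# name to the URL: keep only what is right of the rightmost slash or
# backslash in the local file name.
remote_name() {
  printf '%s\n' "$1" | sed -e 's|.*[/\\]||'
}
remote_name '/home/user/img/photo.png'   # photo.png
remote_name 'C:\data\report.txt'         # report.txt
```

The sed expression uses `|` as the substitution delimiter so that both `/` and `\` can sit plainly inside the bracket expression.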
--url-query <data> (all) This option adds a piece of data, usually a name + value pair, to the end of the URL query part. The syntax is identical to that used for --data-urlencode with one extension: If the argument starts with a '+' (plus), the rest of the string is provided as-is unencoded. The query part of a URL is the one following the question mark on the right end. --url-query can be used several times in a command line Examples: curl --url-query name=val https://example.com curl --url-query =encodethis http://example.net/foo curl --url-query name@file https://example.com curl --url-query @fileonly https://example.com curl --url-query "+name=%20foo" https://example.com See also --data-urlencode and -G, --get. Added in 7.87.0. -B, --use-ascii (FTP LDAP) Enable ASCII transfer. For FTP, this can also be enforced by using a URL that ends with ";type=A". This option causes data sent to stdout to be in text mode for win32 systems. Providing -B, --use-ascii multiple times has no extra effect. Disable it again with --no-use-ascii. Example: curl -B ftp://example.com/README See also --crlf and --data-ascii. -u, --user <user:password> Specify the user name and password to use for server authentication. Overrides -n, --netrc and --netrc-optional. If you simply specify the user name, curl prompts for a password. The user name and password are split on the first colon, which makes it impossible to use a colon in the user name with this option; the password, however, can still contain one. On systems where it works, curl hides the given option argument from process listings. This is not enough to protect credentials from possibly getting seen by other users on the same system as they still are visible for a brief moment before being cleared. Such sensitive data should instead be retrieved from a file or similar, and never used in clear text on a command line. 
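As the paragraph above advises, credentials are better kept out of the command line. One hedged sketch is a config file passed with -K, --config; the file name and credentials below are made up for illustration:

```
# auth.conf - hypothetical example file.
# Pass it to curl with: curl -K auth.conf https://example.com
# Keeping "user" here instead of on the command line hides it from
# process listings and shell history.
user = "alice:s3cret"
```

Restrict the file's permissions (for example with chmod 600) so other local users cannot read it.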
When using Kerberos V5 with a Windows based server you should include the Windows domain name in the user name, in order for the server to successfully obtain a Kerberos Ticket. If you do not, then the initial authentication handshake may fail. When using NTLM, the user name can be specified simply as the user name, without the domain, if there is a single domain and forest in your setup for example. To specify the domain name use either Down-Level Logon Name or UPN (User Principal Name) formats. For example, EXAMPLE\user and user@example.com respectively. If you use a Windows SSPI-enabled curl binary and perform Kerberos V5, Negotiate, NTLM or Digest authentication then you can tell curl to select the user name and password from your environment by specifying a single colon with this option: "-u :". If -u, --user is provided several times, the last set value is used. Example: curl -u user:secret https://example.com See also -n, --netrc and -K, --config. -A, --user-agent <name> (HTTP) Specify the User-Agent string to send to the HTTP server. To encode blanks in the string, surround the string with single quote marks. This header can also be set with the -H, --header or the --proxy-header options. If you give an empty argument to -A, --user-agent (""), it removes the header completely from the request. If you prefer a blank header, you can set it to a single space (" "). If -A, --user-agent is provided several times, the last set value is used. Example: curl -A "Agent 007" https://example.com See also -H, --header and --proxy-header. --variable <[%]name=text/@file> Set a variable with "name=content" or "name@file" (where "file" can be stdin if set to a single dash (-)). The name is a case sensitive identifier that must consist of no other letters than a-z, A-Z, 0-9 or underscore. The specified content is then associated with this identifier. Setting the same variable name again overwrites the old contents with the new. 
The contents of a variable can be referenced in a later command line option when that option name is prefixed with "--expand-", and the name is used as "{{name}}" (without the quotes). --variable can import environment variables into the name space. Opt to either require the environment variable to be set or provide a default value for the variable in case it is not already set. --variable %name imports the variable called 'name' but exits with an error if that environment variable is not already set. To provide a default value if the environment variable is not set, use --variable %name=content or --variable %name@content. Note that on some systems - but not all - environment variables are case insensitive. When expanding variables, curl supports a set of functions that can make the variable contents more convenient to use. You apply a function to a variable expansion by adding a colon and then listing the desired functions in a comma-separated list that is evaluated in a left-to-right order. Variable contents holding null bytes that are not encoded when expanded cause an error. Available functions: trim removes all leading and trailing white space. json outputs the content using JSON string quoting rules. url shows the content URL (percent) encoded. b64 outputs the content base64 encoded. --variable can be used several times in a command line Example: curl --variable name=smith https://example.com See also -K, --config. Added in 8.3.0. -v, --verbose Makes curl verbose during the operation. Useful for debugging and seeing what's going on "under the hood". A line starting with '>' means "header data" sent by curl, '<' means "header data" received by curl that is hidden in normal cases, and a line starting with '*' means additional info provided by curl. If you only want HTTP headers in the output, -i, --include or -D, --dump-header might be more suitable options. 
If you think this option still does not give you enough details, consider using --trace or --trace-ascii instead. Note that verbose output of curl activities and network traffic might contain sensitive data, including user names, credentials or secret data content. Be aware and be careful when sharing trace logs with others. This option is global and does not need to be specified for each use of --next. Providing -v, --verbose multiple times has no extra effect. Disable it again with --no-verbose. Example: curl --verbose https://example.com See also -i, --include, -s, --silent, --trace and --trace-ascii. This option is mutually exclusive to --trace and --trace-ascii. -V, --version Displays information about curl and the libcurl version it uses. The first line includes the full version of curl, libcurl and other 3rd party libraries linked with the executable. The second line (starts with "Release-Date:") shows the release date. The third line (starts with "Protocols:") shows all protocols that libcurl reports to support. The fourth line (starts with "Features:") shows specific features libcurl reports to offer. Available features include: alt-svc Support for the Alt-Svc: header is provided. AsynchDNS This curl uses asynchronous name resolves. Asynchronous name resolves can be done using either the c-ares or the threaded resolver backends. brotli Support for automatic brotli compression over HTTP(S). CharConv curl was built with support for character set conversions (like EBCDIC) Debug This curl uses a libcurl built with Debug. This enables more error-tracking and memory debugging etc. For curl-developers only! gsasl The built-in SASL authentication includes extensions to support SCRAM because libcurl was built with libgsasl. GSS-API GSS-API is supported. HSTS HSTS support is present. HTTP2 HTTP/2 support has been built-in. HTTP3 HTTP/3 support has been built-in. HTTPS-proxy This curl is built to support HTTPS proxy. 
IDN This curl supports IDN - international domain names. IPv6 You can use IPv6 with this. Kerberos Kerberos V5 authentication is supported. Largefile This curl supports transfers of large files, files larger than 2GB. libz Automatic decompression (via gzip, deflate) of compressed files over HTTP is supported. MultiSSL This curl supports multiple TLS backends. NTLM NTLM authentication is supported. NTLM_WB NTLM delegation to winbind helper is supported. PSL PSL is short for Public Suffix List and means that this curl has been built with knowledge about "public suffixes". SPNEGO SPNEGO authentication is supported. SSL SSL versions of various protocols are supported, such as HTTPS, FTPS, POP3S and so on. SSPI SSPI is supported. TLS-SRP SRP (Secure Remote Password) authentication is supported for TLS. TrackMemory Debug memory tracking is supported. Unicode Unicode support on Windows. UnixSockets Unix sockets support is provided. zstd Automatic decompression (via zstd) of compressed files over HTTP is supported. Example: curl --version See also -h, --help and -M, --manual. -w, --write-out <format> Make curl display information on stdout after a completed transfer. The format is a string that may contain plain text mixed with any number of variables. The format can be specified as a literal "string", or you can have curl read the format from a file with "@filename" and to tell curl to read the format from stdin you write "@-". The variables present in the output format are substituted by the value or text that curl thinks fit, as described below. All variables are specified as %{variable_name} and to output a normal % you just write them as %%. You can output a newline by using \n, a carriage return with \r and a tab space with \t. The output is by default written to standard output, but can be changed with %{stderr} and %output{}. 
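As a minimal offline sketch of -w, --write-out substitution (assuming file:// support; the path is an arbitrary example):

```shell
# Offline sketch of -w, --write-out: print the downloaded body size.
# file:// avoids network access; /tmp/writeout-demo.txt is arbitrary.
printf 'twelve bytes' > /tmp/writeout-demo.txt
curl -s -o /dev/null -w '%{size_download}\n' "file:///tmp/writeout-demo.txt"
```

The same pattern with variables such as %{response_code} or %{time_total} is the usual way to feed curl results into scripts.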
Output HTTP headers from the most recent request by using %header{name} where name is the case insensitive name of the header (without the trailing colon). The header contents are exactly as sent over the network, with leading and trailing whitespace trimmed (added in 7.84.0). Select a specific target destination file to write the output to, by using %output{name} (added in curl 8.3.0) where name is the full file name. The output following that instruction is then written to that file. More than one %output{} instruction can be specified in the same write-out argument. If the file name cannot be created, curl leaves the output destination to the one used prior to the %output{} instruction. Use %output{>>name} to append data to an existing file. NOTE: In Windows the %-symbol is a special symbol used to expand environment variables. In batch files all occurrences of % must be doubled when using this option to properly escape. If this option is used at the command prompt then the % cannot be escaped and unintended expansion is possible. The variables available are: certs Output the certificate chain with details. Supported only by the OpenSSL, GnuTLS, Schannel and Secure Transport backends. (Added in 7.88.0) content_type The Content-Type of the requested document, if there was any. errormsg The error message. (Added in 7.75.0) exitcode The numerical exit code of the transfer. (Added in 7.75.0) filename_effective The ultimate filename that curl writes out to. This is only meaningful if curl is told to write to a file with the -O, --remote-name or -o, --output option. It's most useful in combination with the -J, --remote-header-name option. ftp_entry_path The initial path curl ended up in when logging on to the remote FTP server. header_json A JSON object with all HTTP response headers from the recent transfer. Values are provided as arrays, since in the case of multiple headers there can be multiple values. 
(Added in 7.83.0) The header names are provided in lowercase, listed in order of appearance over the wire, except for duplicated headers, which are grouped on their first occurrence; each value is presented in the JSON array. http_code The numerical response code that was found in the last retrieved HTTP(S) or FTP(S) transfer. http_connect The numerical code that was found in the last response (from a proxy) to a curl CONNECT request. http_version The HTTP version that was effectively used. (Added in 7.50.0) json A JSON object with all available keys. (Added in 7.70.0) local_ip The IP address of the local end of the most recently done connection - can be either IPv4 or IPv6. local_port The local port number of the most recently done connection. method The HTTP method used in the most recent HTTP request. (Added in 7.72.0) num_certs Number of server certificates received in the TLS handshake. Supported only by the OpenSSL, GnuTLS, Schannel and Secure Transport backends. (Added in 7.88.0) num_connects Number of new connects made in the recent transfer. num_headers The number of response headers in the most recent request (restarted at each redirect). Note that the status line IS NOT a header. (Added in 7.73.0) num_redirects Number of redirects that were followed in the request. onerror The rest of the output is only shown if the transfer returned a non-zero error. (Added in 7.75.0) proxy_ssl_verify_result The result of the HTTPS proxy's SSL peer certificate verification that was requested. 0 means the verification was successful. (Added in 7.52.0) redirect_url When an HTTP request was made without -L, --location to follow redirects (or when --max-redirs is met), this variable shows the actual URL a redirect would have gone to. referer The Referer: header, if there was any. (Added in 7.76.0) remote_ip The remote IP address of the most recently done connection - can be either IPv4 or IPv6. remote_port The remote port number of the most recently done connection. 
response_code The numerical response code that was found in the last transfer (formerly known as "http_code"). scheme The URL scheme (sometimes called protocol) that was effectively used. (Added in 7.52.0) size_download The total amount of bytes that were downloaded. This is the size of the body/data that was transferred, excluding headers. size_header The total amount of bytes of the downloaded headers. size_request The total amount of bytes that were sent in the HTTP request. size_upload The total amount of bytes that were uploaded. This is the size of the body/data that was transferred, excluding headers. speed_download The average download speed that curl measured for the complete download. Bytes per second. speed_upload The average upload speed that curl measured for the complete upload. Bytes per second. ssl_verify_result The result of the SSL peer certificate verification that was requested. 0 means the verification was successful. stderr From this point on, the -w, --write-out output is written to standard error. (Added in 7.63.0) stdout From this point on, the -w, --write-out output is written to standard output. This is the default, but can be used to switch back after switching to stderr. (Added in 7.63.0) time_appconnect The time, in seconds, it took from the start until the SSL/SSH/etc connect/handshake to the remote host was completed. time_connect The time, in seconds, it took from the start until the TCP connect to the remote host (or proxy) was completed. time_namelookup The time, in seconds, it took from the start until the name resolving was completed. time_pretransfer The time, in seconds, it took from the start until the file transfer was just about to begin. This includes all pre-transfer commands and negotiations that are specific to the particular protocol(s) involved. time_redirect The time, in seconds, it took for all redirection steps including name lookup, connect, pretransfer and transfer before the final transaction was started. 
time_redirect shows the complete execution time for multiple redirections. time_starttransfer The time, in seconds, it took from the start until the first byte is received. This includes time_pretransfer and also the time the server needed to calculate the result. time_total The total time, in seconds, that the full operation lasted. url The URL that was fetched. (Added in 7.75.0) url.scheme The scheme part of the URL that was fetched. (Added in 8.1.0) url.user The user part of the URL that was fetched. (Added in 8.1.0) url.password The password part of the URL that was fetched. (Added in 8.1.0) url.options The options part of the URL that was fetched. (Added in 8.1.0) url.host The host part of the URL that was fetched. (Added in 8.1.0) url.port The port number of the URL that was fetched. If no port number was specified, but the URL scheme is known, that scheme's default port number is shown. (Added in 8.1.0) url.path The path part of the URL that was fetched. (Added in 8.1.0) url.query The query part of the URL that was fetched. (Added in 8.1.0) url.fragment The fragment part of the URL that was fetched. (Added in 8.1.0) url.zoneid The zone id part of the URL that was fetched. (Added in 8.1.0) urle.scheme The scheme part of the effective (last) URL that was fetched. (Added in 8.1.0) urle.user The user part of the effective (last) URL that was fetched. (Added in 8.1.0) urle.password The password part of the effective (last) URL that was fetched. (Added in 8.1.0) urle.options The options part of the effective (last) URL that was fetched. (Added in 8.1.0) urle.host The host part of the effective (last) URL that was fetched. (Added in 8.1.0) urle.port The port number of the effective (last) URL that was fetched. If no port number was specified, but the URL scheme is known, that scheme's default port number is shown. (Added in 8.1.0) urle.path The path part of the effective (last) URL that was fetched. 
(Added in 8.1.0) urle.query The query part of the effective (last) URL that was fetched. (Added in 8.1.0) urle.fragment The fragment part of the effective (last) URL that was fetched. (Added in 8.1.0) urle.zoneid The zone id part of the effective (last) URL that was fetched. (Added in 8.1.0) urlnum The URL index number of this transfer, 0-indexed. Unglobbed URLs share the same index number as the origin globbed URL. (Added in 7.75.0) url_effective The URL that was fetched last. This is most meaningful if you have told curl to follow location: headers. If -w, --write-out is provided several times, the last set value is used. Example: curl -w '%{response_code}\n' https://example.com See also -v, --verbose and -I, --head. --xattr When saving output to a file, this option tells curl to store certain file metadata in extended file attributes. Currently, the URL is stored in the xdg.origin.url attribute and, for HTTP, the content type is stored in the mime_type attribute. If the file system does not support extended attributes, a warning is issued. Providing --xattr multiple times has no extra effect. Disable it again with --no-xattr. Example: curl --xattr -o storage https://example.com See also -R, --remote-time, -w, --write-out and -v, --verbose. FILES top ~/.curlrc Default config file, see -K, --config for details. ENVIRONMENT top The environment variables can be specified in lower case or upper case. The lower case version has precedence. http_proxy is an exception as it is only available in lower case. Using an environment variable to set the proxy has the same effect as using the -x, --proxy option. http_proxy [protocol://]<host>[:port] Sets the proxy server to use for HTTP. HTTPS_PROXY [protocol://]<host>[:port] Sets the proxy server to use for HTTPS. [url-protocol]_PROXY [protocol://]<host>[:port] Sets the proxy server to use for [url-protocol], where the protocol is a protocol that curl supports and as specified in a URL. FTP, FTPS, POP3, IMAP, SMTP, LDAP, etc. 
ALL_PROXY [protocol://]<host>[:port] Sets the proxy server to use if no protocol-specific proxy is set. NO_PROXY <comma-separated list of hosts/domains> A list of host names that should not go through any proxy. If set to an asterisk '*' only, it matches all hosts. Each name in this list is matched as either a domain name which contains the hostname, or the hostname itself. This environment variable disables use of the proxy even when specified with the -x, --proxy option. That is, NO_PROXY=direct.example.com curl -x http://proxy.example.com http://direct.example.com accesses the target URL directly, and NO_PROXY=direct.example.com curl -x http://proxy.example.com http://somewhere.example.com accesses the target URL through the proxy. The list of host names can also include numerical IP addresses, and IPv6 versions should then be given without enclosing brackets. IP addresses can be specified using CIDR notation: an appended slash and number specifies the number of "network bits" out of the address to use in the comparison (added in 7.86.0). For example "192.168.0.0/16" would match all addresses starting with "192.168". APPDATA <dir> On Windows, this variable is used when trying to find the home directory. It is used if the primary home variables are all unset. COLUMNS <terminal width> If set, the specified number of characters is used as the terminal width when the alternative progress-bar is shown. If not set, curl tries to figure it out using other ways. CURL_CA_BUNDLE <file> If set, it is used as the --cacert value. CURL_HOME <dir> If set, this is the first variable curl checks when trying to find its home directory. If not set, it continues to check XDG_CONFIG_HOME. CURL_SSL_BACKEND <TLS backend> If curl was built with support for "MultiSSL", meaning that it has built-in support for more than one TLS backend, this environment variable can be set to the case insensitive name of the particular backend to use when curl is invoked. 
Setting a name that is not a built-in alternative makes curl stay with the default. SSL backend names (case-insensitive): bearssl, gnutls, mbedtls, openssl, rustls, schannel, secure-transport, wolfssl HOME <dir> If set, this is used to find the home directory when that is needed, such as when looking for the default .curlrc. CURL_HOME and XDG_CONFIG_HOME have preference. QLOGDIR <directory name> If curl was built with HTTP/3 support, setting this environment variable to a local directory makes curl produce qlogs in that directory, using file names named after the destination connection id (in hex). Do note that these files can become rather large. Works with the ngtcp2 and quiche QUIC backends. SHELL Used on VMS when trying to detect if using a DCL or a Unix shell. SSL_CERT_DIR <dir> If set, it is used as the --capath value. SSL_CERT_FILE <path> If set, it is used as the --cacert value. SSLKEYLOGFILE <file name> If you set this environment variable to a file name, curl stores TLS secrets from its connections in that file when invoked to enable you to analyze the TLS traffic in real time using network analyzing tools such as Wireshark. This works with the following TLS backends: OpenSSL, libressl, BoringSSL, GnuTLS and wolfSSL. USERPROFILE <dir> On Windows, this variable is used when trying to find the home directory. It is used if the other, primary, variables are all unset. If set, curl uses the path "$USERPROFILE\Application Data". XDG_CONFIG_HOME <dir> If CURL_HOME is not set, this variable is checked when looking for a default .curlrc file. PROXY PROTOCOL PREFIXES top The proxy string may be specified with a protocol:// prefix to specify alternative proxy protocols. If no protocol is specified in the proxy string or if the string does not match a supported one, the proxy is treated as an HTTP proxy. The supported proxy protocol prefixes are as follows: http:// Makes curl use it as an HTTP proxy. The default if no scheme prefix is used. 
https:// Makes curl treat it as an HTTPS proxy. socks4:// Makes it the equivalent of --socks4 socks4a:// Makes it the equivalent of --socks4a socks5:// Makes it the equivalent of --socks5 socks5h:// Makes it the equivalent of --socks5-hostname EXIT CODES top There are a bunch of different error codes and their corresponding error messages that may appear under error conditions. At the time of this writing, the exit codes are: 0 Success. The operation completed successfully according to the instructions. 1 Unsupported protocol. This build of curl has no support for this protocol. 2 Failed to initialize. 3 URL malformed. The syntax was not correct. 4 A feature or option that was needed to perform the desired request was not enabled or was explicitly disabled at build-time. To make curl able to do this, you probably need another build of libcurl. 5 Could not resolve proxy. The given proxy host could not be resolved. 6 Could not resolve host. The given remote host could not be resolved. 7 Failed to connect to host. 8 Weird server reply. The server sent data curl could not parse. 9 FTP access denied. The server denied login or denied access to the particular resource or directory you wanted to reach. Most often you tried to change to a directory that does not exist on the server. 10 FTP accept failed. While waiting for the server to connect back when an active FTP session is used, an error code was sent over the control connection or similar. 11 FTP weird PASS reply. Curl could not parse the reply sent to the PASS request. 12 During an active FTP session while waiting for the server to connect back to curl, the timeout expired. 13 FTP weird PASV reply. Curl could not parse the reply sent to the PASV request. 14 FTP weird 227 format. Curl could not parse the 227-line the server sent. 15 FTP cannot use host. Could not resolve the host IP we got in the 227-line. 16 HTTP/2 error. A problem was detected in the HTTP2 framing layer. 
This is somewhat generic and can be one out of several problems, see the error message for details. 17 FTP could not set binary. Could not change transfer method to binary. 18 Partial file. Only a part of the file was transferred. 19 FTP could not download/access the given file, the RETR (or similar) command failed. 21 FTP quote error. A quote command returned error from the server. 22 HTTP page not retrieved. The requested URL was not found or returned another error with the HTTP error code being 400 or above. This return code only appears if -f, --fail is used. 23 Write error. Curl could not write data to a local filesystem or similar. 25 Failed starting the upload. For FTP, the server typically denied the STOR command. 26 Read error. Various reading problems. 27 Out of memory. A memory allocation request failed. 28 Operation timeout. The specified time-out period was reached according to the conditions. 30 FTP PORT failed. The PORT command failed. Not all FTP servers support the PORT command, try doing a transfer using PASV instead! 31 FTP could not use REST. The REST command failed. This command is used for resumed FTP transfers. 33 HTTP range error. The range "command" did not work. 34 HTTP post error. Internal post-request generation error. 35 SSL connect error. The SSL handshaking failed. 36 Bad download resume. Could not continue an earlier aborted download. 37 FILE could not read file. Failed to open the file. Permissions? 38 LDAP cannot bind. LDAP bind operation failed. 39 LDAP search failed. 41 Function not found. A required LDAP function was not found. 42 Aborted by callback. An application told curl to abort the operation. 43 Internal error. A function was called with a bad parameter. 45 Interface error. A specified outgoing interface could not be used. 47 Too many redirects. When following redirects, curl hit the maximum amount. 48 Unknown option specified to libcurl. 
This indicates that you passed a weird option to curl that was passed on to libcurl and rejected. Read up in the manual! 49 Malformed telnet option. 52 The server did not reply anything, which here is considered an error. 53 SSL crypto engine not found. 54 Cannot set SSL crypto engine as default. 55 Failed sending network data. 56 Failure in receiving network data. 58 Problem with the local certificate. 59 Could not use specified SSL cipher. 60 Peer certificate cannot be authenticated with known CA certificates. 61 Unrecognized transfer encoding. 63 Maximum file size exceeded. 64 Requested FTP SSL level failed. 65 Sending the data requires a rewind that failed. 66 Failed to initialize SSL Engine. 67 The user name, password, or similar was not accepted and curl failed to log in. 68 File not found on TFTP server. 69 Permission problem on TFTP server. 70 Out of disk space on TFTP server. 71 Illegal TFTP operation. 72 Unknown TFTP transfer ID. 73 File already exists (TFTP). 74 No such user (TFTP). 77 Problem reading the SSL CA cert (path? access rights?). 78 The resource referenced in the URL does not exist. 79 An unspecified error occurred during the SSH session. 80 Failed to shut down the SSL connection. 82 Could not load CRL file, missing or wrong format. 83 Issuer check failed. 84 The FTP PRET command failed. 85 Mismatch of RTSP CSeq numbers. 86 Mismatch of RTSP Session Identifiers. 87 Unable to parse FTP file list. 88 FTP chunk callback reported error. 89 No connection available, the session is queued. 90 SSL public key does not match pinned public key. 91 Invalid SSL certificate status. 92 Stream error in HTTP/2 framing layer. 93 An API function was called from inside a callback. 94 An authentication function returned an error. 95 A problem was detected in the HTTP/3 layer. This is somewhat generic and can be one out of several problems, see the error message for details. 96 QUIC connection error. This error may be caused by an SSL library error. 
QUIC is the protocol used for HTTP/3 transfers. 97 Proxy handshake error. 98 A client-side certificate is required to complete the TLS handshake. 99 Poll or select returned fatal error. XX More error codes might appear here in future releases. The existing ones are meant to never change. BUGS top If you experience any problems with curl, submit an issue in the project's bug tracker on GitHub: https://github.com/curl/curl/issues AUTHORS / CONTRIBUTORS top Daniel Stenberg is the main author, but the whole list of contributors is found in the separate THANKS file. WWW top https://curl.se SEE ALSO top ftp(1), wget(1) COLOPHON top This page is part of the curl (Command line tool and library for transferring data with URLs) project. Information about the project can be found at https://curl.haxx.se/. If you have a bug report for this manual page, see https://curl.haxx.se/docs/bugs.html. This page was obtained from the project's upstream Git repository https://github.com/curl/curl.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-22.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org curl 8.6.0 December 22 2023 curl(1) Pages that refer to this page: curl-config(1), git-config(1), mk-ca-bundle(1), pmwebapi(3), systemd-socket-proxyd(8), update-pciids(8) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH. 
| # curl\n\n> Transfers data from or to a server.\n> Supports most protocols, including HTTP, FTP, and POP3.\n> More information: <https://curl.se/docs/manpage.html>.\n\n- Download the contents of a URL to a file:\n\n`curl {{http://example.com}} --output {{path/to/file}}`\n\n- Download a file, saving the output under the filename indicated by the URL:\n\n`curl --remote-name {{http://example.com/filename}}`\n\n- Download a file, following location redirects, automatically continuing (resuming) a previous file transfer, and returning an error on server error:\n\n`curl --fail --remote-name --location --continue-at - {{http://example.com/filename}}`\n\n- Send form-encoded data (POST request of type `application/x-www-form-urlencoded`). Use `--data @file_name` or `--data @'-'` to read from STDIN:\n\n`curl --data {{'name=bob'}} {{http://example.com/form}}`\n\n- Send a request with an extra header, using a custom HTTP method:\n\n`curl --header {{'X-My-Header: 123'}} --request {{PUT}} {{http://example.com}}`\n\n- Send data in JSON format, specifying the appropriate content-type header:\n\n`curl --data {{'{"name":"bob"}'}} --header {{'Content-Type: application/json'}} {{http://example.com/users/1234}}`\n\n- Pass a username and prompt for a password to authenticate to the server:\n\n`curl --user {{username}} {{http://example.com}}`\n\n- Pass client certificate and key for a resource, skipping certificate validation:\n\n`curl --cert {{client.pem}} --key {{key.pem}} --insecure {{https://example.com}}`\n |
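Because the exit codes listed above are documented as stable ("meant to never change"), scripts can safely branch on curl's exit status. A minimal sketch, assuming a hypothetical helper named `report_curl_status` (not part of curl itself) that maps a few of the documented codes to messages:

```shell
# Hypothetical helper: translate some of the documented curl exit codes
# (from the table above) into human-readable messages.
report_curl_status() {
  case $1 in
    0)  echo "transfer ok" ;;
    60) echo "peer certificate cannot be authenticated" ;;
    67) echo "login failed" ;;
    78) echo "remote resource does not exist" ;;
    *)  echo "curl exited with code $1" ;;
  esac
}

# Typical use (the network call is commented out to keep the sketch
# self-contained; $? holds curl's exit status):
# curl --silent --fail https://example.com/ > /dev/null
# report_curl_status $?
report_curl_status 78   # prints: remote resource does not exist
```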
cut | cut(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training cut(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | AUTHOR | REPORTING BUGS | COPYRIGHT | SEE ALSO | COLOPHON CUT(1) User Commands CUT(1) NAME top cut - remove sections from each line of files SYNOPSIS top cut OPTION... [FILE]... DESCRIPTION top Print selected parts of lines from each FILE to standard output. With no FILE, or when FILE is -, read standard input. Mandatory arguments to long options are mandatory for short options too. -b, --bytes=LIST select only these bytes -c, --characters=LIST select only these characters -d, --delimiter=DELIM use DELIM instead of TAB for field delimiter -f, --fields=LIST select only these fields; also print any line that contains no delimiter character, unless the -s option is specified -n (ignored) --complement complement the set of selected bytes, characters or fields -s, --only-delimited do not print lines not containing delimiters --output-delimiter=STRING use STRING as the output delimiter the default is to use the input delimiter -z, --zero-terminated line delimiter is NUL, not newline --help display this help and exit --version output version information and exit Use one, and only one of -b, -c or -f. Each LIST is made up of one range, or many ranges separated by commas. Selected input is written in the same order that it is read, and is written exactly once. Each range is one of: N N'th byte, character or field, counted from 1 N- from N'th byte, character or field, to end of line N-M from N'th to M'th (included) byte, character or field -M from first to M'th (included) byte, character or field AUTHOR top Written by David M. Ihnat, David MacKenzie, and Jim Meyering. REPORTING BUGS top GNU coreutils online help: <https://www.gnu.org/software/coreutils/> Report any translation bugs to <https://translationproject.org/team/> COPYRIGHT top Copyright 2023 Free Software Foundation, Inc. 
License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. SEE ALSO top Full documentation <https://www.gnu.org/software/coreutils/cut> or available locally via: info '(coreutils) cut invocation' COLOPHON top This page is part of the coreutils (basic file, shell and text manipulation utilities) project. Information about the project can be found at http://www.gnu.org/software/coreutils/. If you have a bug report for this manual page, see http://www.gnu.org/software/coreutils/. This page was obtained from the tarball coreutils-9.4.tar.xz fetched from http://ftp.gnu.org/gnu/coreutils/ on 2023-12-22. If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org GNU coreutils 9.4 August 2023 CUT(1) Pages that refer to this page: man-pages(7) | # cut\n\n> Cut out fields from `stdin` or files.\n> More information: <https://www.gnu.org/software/coreutils/cut>.\n\n- Print a specific character/field range of each line:\n\n`{{command}} | cut --{{characters|fields}}={{1|1,10|1-10|1-|-10}}`\n\n- Print a field range of each line with a specific delimiter:\n\n`{{command}} | cut --delimiter="{{,}}" --fields={{1}}`\n\n- Print a character range of each line of the specific file:\n\n`cut --characters={{1}} {{path/to/file}}`\n |
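The LIST range forms described above (N, N-, N-M, -M) can be exercised directly. A short sketch using `printf` to supply deterministic input (GNU long options, as on this page):

```shell
# Each LIST form from the cut description above, on fixed input.
printf 'a,b,c,d\n' | cut --delimiter=',' --fields=2    # second field: b
printf 'abcdef\n'  | cut --characters=2-4              # N-M range: bcd
printf 'abcdef\n'  | cut --characters=-3               # -M range: abc
printf 'a:b:c\n'   | cut --delimiter=':' --fields=1,3  # two ranges, output
                                                       # keeps the input
                                                       # delimiter: a:c
```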
dash | dash(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training dash(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | EXIT STATUS | ENVIRONMENT | FILES | SEE ALSO | HISTORY | BUGS | COLOPHON DASH(1) General Commands Manual DASH(1) NAME top dash - command interpreter (shell) SYNOPSIS top dash [-aCefnuvxIimqVEb] [+aCefnuvxIimqVEb] [-o option_name] [+o option_name] [command_file [argument ...]] dash -c [-aCefnuvxIimqVEb] [+aCefnuvxIimqVEb] [-o option_name] [+o option_name] command_string [command_name [argument ...]] dash -s [-aCefnuvxIimqVEb] [+aCefnuvxIimqVEb] [-o option_name] [+o option_name] [argument ...] DESCRIPTION top dash is the standard command interpreter for the system. The current version of dash is in the process of being changed to conform with the POSIX 1003.2 and 1003.2a specifications for the shell. This version has many features which make it appear similar in some respects to the Korn shell, but it is not a Korn shell clone (see ksh(1)). Only features designated by POSIX, plus a few Berkeley extensions, are being incorporated into this shell. This man page is not intended to be a tutorial or a complete specification of the shell. Overview The shell is a command that reads lines from either a file or the terminal, interprets them, and generally executes other commands. It is the program that is running when a user logs into the system (although a user can select a different shell with the chsh(1) command). The shell implements a language that has flow control constructs, a macro facility that provides a variety of features in addition to data storage, along with built in history and line editing capabilities. It incorporates many features to aid interactive use and has the advantage that the interpretative language is common to both interactive and non-interactive use (shell scripts). That is, commands can be typed directly to the running shell or can be put into a file and the file can be executed directly by the shell. 
Invocation If no args are present and if the standard input of the shell is connected to a terminal (or if the -i flag is set), and the -c option is not present, the shell is considered an interactive shell. An interactive shell generally prompts before each command and handles programming and command errors differently (as described below). When first starting, the shell inspects argument 0, and if it begins with a dash -, the shell is also considered a login shell. This is normally done automatically by the system when the user first logs in. A login shell first reads commands from the files /etc/profile and .profile if they exist. If the environment variable ENV is set on entry to an interactive shell, or is set in the .profile of a login shell, the shell next reads commands from the file named in ENV. Therefore, a user should place commands that are to be executed only at login time in the .profile file, and commands that are executed for every interactive shell inside the ENV file. To set the ENV variable to some file, place the following line in your .profile of your home directory ENV=$HOME/.shinit; export ENV substituting for .shinit any filename you wish. If command line arguments besides the options have been specified, then the shell treats the first argument as the name of a file from which to read commands (a shell script), and the remaining arguments are set as the positional parameters of the shell ($1, $2, etc). Otherwise, the shell reads commands from its standard input. Argument List Processing All of the single letter options that have a corresponding name can be used as an argument to the -o option. The set -o name is provided next to the single letter option in the description below. Specifying a dash - turns the option on, while using a plus + disables the option. The following options can be set from the command line or with the set builtin (described later). -a allexport Export all variables assigned to. 
-c Read commands from the command_string operand instead of from the standard input. Special parameter 0 will be set from the command_name operand and the positional parameters ($1, $2, etc.) set from the remaining argument operands. -C noclobber Don't overwrite existing files with >. -e errexit If not interactive, exit immediately if any untested command fails. The exit status of a command is considered to be explicitly tested if the command is used to control an if, elif, while, or until; or if the command is the left hand operand of an && or || operator. -f noglob Disable pathname expansion. -n noexec If not interactive, read commands but do not execute them. This is useful for checking the syntax of shell scripts. -u nounset Write a message to standard error when attempting to expand a variable that is not set, and if the shell is not interactive, exit immediately. -v verbose The shell writes its input to standard error as it is read. Useful for debugging. -x xtrace Write each command to standard error (preceded by a + ) before it is executed. Useful for debugging. -I ignoreeof Ignore EOF's from input when interactive. -i interactive Force the shell to behave interactively. -l Make dash act as if it had been invoked as a login shell. -m monitor Turn on job control (set automatically when interactive). -s stdin Read commands from standard input (set automatically if no file arguments are present). This option has no effect when set after the shell has already started running (i.e. with set). -V vi Enable the built-in vi(1) command line editor (disables -E if it has been set). -E emacs Enable the built-in emacs(1) command line editor (disables -V if it has been set). -b notify Enable asynchronous notification of background job completion. 
(UNIMPLEMENTED for 4.4alpha) Lexical Structure The shell reads input in terms of lines from a file and breaks it up into words at whitespace (blanks and tabs), and at certain sequences of characters that are special to the shell called operators. There are two types of operators: control operators and redirection operators (their meaning is discussed later). Following is a list of operators: Control operators: & && ( ) ; ;; | || <newline> Redirection operators: < > >| << >> <& >& <<- <> Quoting Quoting is used to remove the special meaning of certain characters or words to the shell, such as operators, whitespace, or keywords. There are three types of quoting: matched single quotes, matched double quotes, and backslash. Backslash A backslash preserves the literal meaning of the following character, with the exception of newline. A backslash preceding a newline is treated as a line continuation. Single Quotes Enclosing characters in single quotes preserves the literal meaning of all the characters (except single quotes, making it impossible to put single-quotes in a single-quoted string). Double Quotes Enclosing characters within double quotes preserves the literal meaning of all characters except dollarsign ($), backquote (`), and backslash (\). The backslash inside double quotes is historically weird, and serves to quote only the following characters: $ ` " \ <newline>. Otherwise it remains literal. Reserved Words Reserved words are words that have special meaning to the shell and are recognized at the beginning of a line and after a control operator. The following are reserved words: ! elif fi while case else for then { } do done until if esac Their meaning is discussed later. Aliases An alias is a name and corresponding value set using the alias(1) builtin command. Whenever a reserved word may occur (see above), and after checking for reserved words, the shell checks the word to see if it matches an alias. 
If it does, it replaces it in the input stream with its value. For example, if there is an alias called lf with the value ls -F, then the input: lf foobar return would become ls -F foobar return Aliases provide a convenient way for naive users to create shorthands for commands without having to learn how to create functions with arguments. They can also be used to create lexically obscure code. This use is discouraged. Commands The shell interprets the words it reads according to a language, the specification of which is outside the scope of this man page (refer to the BNF in the POSIX 1003.2 document). Essentially though, a line is read and if the first word of the line (or after a control operator) is not a reserved word, then the shell has recognized a simple command. Otherwise, a complex command or some other special construct may have been recognized. Simple Commands If a simple command has been recognized, the shell performs the following actions: 1. Leading words of the form name=value are stripped off and assigned to the environment of the simple command. Redirection operators and their arguments (as described below) are stripped off and saved for processing. 2. The remaining words are expanded as described in the section called Expansions, and the first remaining word is considered the command name and the command is located. The remaining words are considered the arguments of the command. If no command name resulted, then the name=value variable assignments recognized in item 1 affect the current shell. 3. Redirections are performed as described in the next section. Redirections Redirections are used to change where a command reads its input or sends its output. In general, redirections open, close, or duplicate an existing reference to a file. The overall format used for redirection is: [n] redir-op file where redir-op is one of the redirection operators mentioned previously. Following is a list of the possible redirections. 
The [n] is an optional number between 0 and 9, as in 3 (not [3]), that refers to a file descriptor. [n]> file Redirect standard output (or n) to file. [n]>| file Same, but override the -C option. [n]>> file Append standard output (or n) to file. [n]< file Redirect standard input (or n) from file. [n1]<&n2 Duplicate standard input (or n1) from file descriptor n2. [n]<&- Close standard input (or n). [n1]>&n2 Duplicate standard output (or n1) to file descriptor n2. [n]>&- Close standard output (or n). [n]<> file Open file for reading and writing on standard input (or n). The following redirection is often called a here-document. [n]<< delimiter here-doc-text ... delimiter All the text on successive lines up to the delimiter is saved away and made available to the command on standard input, or file descriptor n if it is specified. If the delimiter as specified on the initial line is quoted, then the here-doc-text is treated literally, otherwise the text is subjected to parameter expansion, command substitution, and arithmetic expansion (as described in the section on Expansions). If the operator is <<- instead of <<, then leading tabs in the here-doc-text are stripped. Search and Execution There are three types of commands: shell functions, builtin commands, and normal programs and the command is searched for (by name) in that order. They each are executed in a different way. When a shell function is executed, all of the shell positional parameters (except $0, which remains unchanged) are set to the arguments of the shell function. The variables which are explicitly placed in the environment of the command (by placing assignments to them before the function name) are made local to the function and are set to the values given. Then the command given in the function definition is executed. The positional parameters are restored to their original values when the command completes. This all occurs within the current shell. 
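The here-document rule above (a quoted delimiter suppresses expansion, an unquoted one allows it) can be sketched in a couple of lines:

```shell
user=alice

# Unquoted delimiter: parameter expansion is performed on the text.
cat <<EOF
expanded: $user
EOF

# Quoted delimiter: the here-doc text is taken literally.
cat <<'EOF'
literal: $user
EOF
```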
Shell builtins are executed internally to the shell, without spawning a new process. Otherwise, if the command name doesn't match a function or builtin, the command is searched for as a normal program in the file system (as described in the next section). When a normal program is executed, the shell runs the program, passing the arguments and the environment to the program. If the program is not a normal executable file (i.e., if it does not begin with the "magic number" whose ASCII representation is "#!", so execve(2) returns ENOEXEC then) the shell will interpret the program in a subshell. The child shell will reinitialize itself in this case, so that the effect will be as if a new shell had been invoked to handle the ad-hoc shell script, except that the location of hashed commands located in the parent shell will be remembered by the child. Note that previous versions of this document and the source code itself misleadingly and sporadically refer to a shell script without a magic number as a "shell procedure". Path Search When locating a command, the shell first looks to see if it has a shell function by that name. Then it looks for a builtin command by that name. If a builtin command is not found, one of two things happen: 1. Command names containing a slash are simply executed without performing any searches. 2. The shell searches each entry in PATH in turn for the command. The value of the PATH variable should be a series of entries separated by colons. Each entry consists of a directory name. The current directory may be indicated implicitly by an empty directory name, or explicitly by a single period. Command Exit Status Each command has an exit status that can influence the behaviour of other shell commands. The paradigm is that a command exits with zero for normal or success, and non-zero for failure, error, or a false indication. The man page for each command should indicate the various exit codes and what they mean. 
Additionally, the builtin commands return exit codes, as does an executed shell function. If a command consists entirely of variable assignments then the exit status of the command is that of the last command substitution if any, otherwise 0. Complex Commands Complex commands are combinations of simple commands with control operators or reserved words, together creating a larger complex command. More generally, a command is one of the following: simple command pipeline list or compound-list compound command function definition Unless otherwise stated, the exit status of a command is that of the last simple command executed by the command. Pipelines A pipeline is a sequence of one or more commands separated by the control operator |. The standard output of all but the last command is connected to the standard input of the next command. The standard output of the last command is inherited from the shell, as usual. The format for a pipeline is: [!] command1 [| command2 ...] The standard output of command1 is connected to the standard input of command2. The standard input, standard output, or both of a command is considered to be assigned by the pipeline before any redirection specified by redirection operators that are part of the command. If the pipeline is not in the background (discussed later), the shell waits for all commands to complete. If the reserved word ! does not precede the pipeline, the exit status is the exit status of the last command specified in the pipeline. Otherwise, the exit status is the logical NOT of the exit status of the last command. That is, if the last command returns zero, the exit status is 1; if the last command returns greater than zero, the exit status is zero. Because pipeline assignment of standard input or standard output or both takes place before redirection, it can be modified by redirection. For example: $ command1 2>&1 | command2 sends both the standard output and standard error of command1 to the standard input of command2. 
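The pipeline rules above (the last command determines the exit status, the reserved word ! negates it, and a 2>&1 written before | sends standard error down the pipe) can be sketched as:

```shell
# Exit status of a pipeline is that of its last command.
true | false
echo "status: $?"        # prints: status: 1

# The reserved word ! logically negates that status.
! true | false
echo "negated: $?"       # prints: negated: 0

# Redirection before the pipe operator merges stderr into the pipe,
# so both lines reach wc.
sh -c 'echo out; echo err >&2' 2>&1 | wc -l   # counts both lines
```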
A ; or newline terminator causes the preceding AND-OR-list (described next) to be executed sequentially; a & causes asynchronous execution of the preceding AND-OR-list. Note that unlike some other shells, each process in the pipeline is a child of the invoking shell (unless it is a shell builtin, in which case it executes in the current shell but any effect it has on the environment is wiped). Background Commands & If a command is terminated by the control operator ampersand (&), the shell executes the command asynchronously that is, the shell does not wait for the command to finish before executing the next command. The format for running a command in background is: command1 & [command2 & ...] If the shell is not interactive, the standard input of an asynchronous command is set to /dev/null. Lists Generally Speaking A list is a sequence of zero or more commands separated by newlines, semicolons, or ampersands, and optionally terminated by one of these three characters. The commands in a list are executed in the order they are written. If command is followed by an ampersand, the shell starts the command and immediately proceeds onto the next command; otherwise it waits for the command to terminate before proceeding to the next one. Short-Circuit List Operators && and || are AND-OR list operators. && executes the first command, and then executes the second command if and only if the exit status of the first command is zero. || is similar, but executes the second command if and only if the exit status of the first command is nonzero. && and || both have the same priority. Flow-Control Constructs if, while, for, case The syntax of the if command is if list then list [ elif list then list ] ... [ else list ] fi The syntax of the while command is while list do list done The two lists are executed repeatedly while the exit status of the first list is zero. 
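The short-circuit operators described above behave as follows; a minimal sketch:

```shell
# && runs the second command only if the first succeeded (status 0).
true  && echo "first succeeded"   # prints: first succeeded

# || runs the second command only if the first failed (nonzero status).
false || echo "first failed"      # prints: first failed

# On the other branch, the second command is skipped entirely.
false && echo "never printed"
true  || echo "never printed"
```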
The until command is similar, but has the word until in place of while, which causes it to repeat until the exit status of the first list is zero. The syntax of the for command is for variable [ in [ word ... ] ] do list done The words following in are expanded, and then the list is executed repeatedly with the variable set to each word in turn. Omitting in word ... is equivalent to in "$@". The syntax of the break and continue command is break [ num ] continue [ num ] Break terminates the num innermost for or while loops. Continue continues with the next iteration of the innermost loop. These are implemented as builtin commands. The syntax of the case command is case word in [(]pattern) list ;; ... esac The pattern can actually be one or more patterns (see Shell Patterns described later), separated by | characters. The ( character before the pattern is optional. Grouping Commands Together Commands may be grouped by writing either (list) or { list; } The first of these executes the commands in a subshell. Builtin commands grouped into a (list) will not affect the current shell. The second form does not fork another shell so is slightly more efficient. Grouping commands together this way allows you to redirect their output as though they were one program: { printf " hello " ; printf " world\n" ; } > greeting Note that } must follow a control operator (here, ;) so that it is recognized as a reserved word and not as another command argument. Functions The syntax of a function definition is name () command A function definition is an executable statement; when executed it installs a function named name and returns an exit status of zero. The command is normally a list enclosed between { and }. Variables may be declared to be local to a function by using a local command. This should appear as the first statement of a function, and the syntax is local [variable | -] ... Local is implemented as a builtin command. 
When a variable is made local, it inherits the initial value and exported and readonly flags from the variable with the same name in the surrounding scope, if there is one. Otherwise, the variable is initially unset. The shell uses dynamic scoping, so that if you make the variable x local to function f, which then calls function g, references to the variable x made inside g will refer to the variable x declared inside f, not to the global variable named x. The only special parameter that can be made local is -. Making - local causes any shell options that are changed via the set command inside the function to be restored to their original values when the function returns. The syntax of the return command is return [exitstatus] It terminates the currently executing function. Return is implemented as a builtin command. Variables and Parameters The shell maintains a set of parameters. A parameter denoted by a name is called a variable. When starting up, the shell turns all the environment variables into shell variables. New variables can be set using the form name=value Variables set by the user must have a name consisting solely of alphabetics, numerics, and underscores - the first of which must not be numeric. A parameter can also be denoted by a number or a special character as explained below. Positional Parameters A positional parameter is a parameter denoted by a number (n > 0). The shell sets these initially to the values of its command line arguments that follow the name of the shell script. The set builtin can also be used to set or reset them. Special Parameters A special parameter is a parameter denoted by one of the following special characters. The value of the parameter is listed next to its character. * Expands to the positional parameters, starting from one. When the expansion occurs within a double- quoted string it expands to a single field with the value of each parameter separated by the first character of the IFS variable, or by a space if IFS is unset. 
@ Expands to the positional parameters, starting from one. When the expansion occurs within double- quotes, each positional parameter expands as a separate argument. If there are no positional parameters, the expansion of @ generates zero arguments, even when @ is double-quoted. What this basically means, for example, is if $1 is abc and $2 is def ghi, then "$@" expands to the two arguments: "abc" "def ghi" # Expands to the number of positional parameters. ? Expands to the exit status of the most recent pipeline. - (Hyphen.) Expands to the current option flags (the single- letter option names concatenated into a string) as specified on invocation, by the set builtin command, or implicitly by the shell. $ Expands to the process ID of the invoked shell. A subshell retains the same value of $ as its parent. ! Expands to the process ID of the most recent background command executed from the current shell. For a pipeline, the process ID is that of the last command in the pipeline. 0 (Zero.) Expands to the name of the shell or shell script. Word Expansions This clause describes the various expansions that are performed on words. Not all expansions are performed on every word, as explained later. Tilde expansions, parameter expansions, command substitutions, arithmetic expansions, and quote removals that occur within a single word expand to a single field. It is only field splitting or pathname expansion that can create multiple fields from a single word. The single exception to this rule is the expansion of the special parameter @ within double-quotes, as was described above. The order of word expansion is: 1. Tilde Expansion, Parameter Expansion, Command Substitution, Arithmetic Expansion (these all occur at the same time). 2. Field Splitting is performed on fields generated by step (1) unless the IFS variable is null. 3. Pathname Expansion (unless set -f is in effect). 4. Quote Removal. 
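The abc / def ghi example above can be run directly; a sketch contrasting "$@" (one field per parameter) with "$*" (a single field joined on the first character of IFS):

```shell
set -- "abc" "def ghi"    # $1 and $2, as in the example above

# "$@": each positional parameter expands as a separate argument.
for arg in "$@"; do
  printf '<%s>\n' "$arg"  # prints: <abc> then <def ghi>
done

# "$*": a single field with the parameters joined by a space
# (the first character of the default IFS).
printf '<%s>\n' "$*"      # prints: <abc def ghi>

echo "$#"                 # number of positional parameters: 2
```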
The $ character is used to introduce parameter expansion, command substitution, or arithmetic evaluation. Tilde Expansion (substituting a user's home directory) A word beginning with an unquoted tilde character (~) is subjected to tilde expansion. All the characters up to a slash (/) or the end of the word are treated as a username and are replaced with the user's home directory. If the username is missing (as in ~/foobar), the tilde is replaced with the value of the HOME variable (the current user's home directory). Parameter Expansion The format for parameter expansion is as follows: ${expression} where expression consists of all characters until the matching }. Any } escaped by a backslash or within a quoted string, and characters in embedded arithmetic expansions, command substitutions, and variable expansions, are not examined in determining the matching }. The simplest form for parameter expansion is: ${parameter} The value, if any, of parameter is substituted. The parameter name or symbol can be enclosed in braces, which are optional except for positional parameters with more than one digit or when parameter is followed by a character that could be interpreted as part of the name. If a parameter expansion occurs inside double-quotes: 1. Pathname expansion is not performed on the results of the expansion. 2. Field splitting is not performed on the results of the expansion, with the exception of @. In addition, a parameter expansion can be modified by using one of the following formats. ${parameter:-word} Use Default Values. If parameter is unset or null, the expansion of word is substituted; otherwise, the value of parameter is substituted. ${parameter:=word} Assign Default Values. If parameter is unset or null, the expansion of word is assigned to parameter. In all cases, the final value of parameter is substituted. Only variables, not positional parameters or special parameters, can be assigned in this way. 
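The two default-value forms above differ in whether the parameter itself is changed; a minimal sketch:

```shell
unset name

# :- substitutes the default but leaves the parameter unset.
echo "${name:-guest}"    # prints: guest
echo "${name:-still}"    # prints: still (name was not assigned)

# := substitutes the default AND assigns it to the parameter.
# The : null command is a common idiom to trigger the expansion.
: "${name:=guest}"
echo "$name"             # prints: guest
```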
${parameter:?[word]} Indicate Error if Null or Unset. If parameter is unset or null, the expansion of word (or a message indicating it is unset if word is omitted) is written to standard error and the shell exits with a nonzero exit status. Otherwise, the value of parameter is substituted. An interactive shell need not exit. ${parameter:+word} Use Alternative Value. If parameter is unset or null, null is substituted; otherwise, the expansion of word is substituted. In the parameter expansions shown previously, use of the colon in the format results in a test for a parameter that is unset or null; omission of the colon results in a test for a parameter that is only unset. ${#parameter} String Length. The length in characters of the value of parameter. The following four varieties of parameter expansion provide for substring processing. In each case, pattern matching notation (see Shell Patterns), rather than regular expression notation, is used to evaluate the patterns. If parameter is * or @, the result of the expansion is unspecified. Enclosing the full parameter expansion string in double-quotes does not cause the following four varieties of pattern characters to be quoted, whereas quoting characters within the braces has this effect. ${parameter%word} Remove Smallest Suffix Pattern. The word is expanded to produce a pattern. The parameter expansion then results in parameter, with the smallest portion of the suffix matched by the pattern deleted. ${parameter%%word} Remove Largest Suffix Pattern. The word is expanded to produce a pattern. The parameter expansion then results in parameter, with the largest portion of the suffix matched by the pattern deleted. ${parameter#word} Remove Smallest Prefix Pattern. The word is expanded to produce a pattern. The parameter expansion then results in parameter, with the smallest portion of the prefix matched by the pattern deleted. ${parameter##word} Remove Largest Prefix Pattern. The word is expanded to produce a pattern. 
The parameter expansion then results in parameter, with the largest portion of the prefix matched by the pattern deleted. Command Substitution Command substitution allows the output of a command to be substituted in place of the command name itself. Command substitution occurs when the command is enclosed as follows: $(command) or (backquoted version): `command` The shell expands the command substitution by executing command in a subshell environment and replacing the command substitution with the standard output of the command, removing sequences of one or more newlines at the end of the substitution. (Embedded newlines before the end of the output are not removed; however, during field splitting, they may be translated into spaces, depending on the value of IFS and quoting that is in effect.) Arithmetic Expansion Arithmetic expansion provides a mechanism for evaluating an arithmetic expression and substituting its value. The format for arithmetic expansion is as follows: $((expression)) The expression is treated as if it were in double-quotes, except that a double-quote inside the expression is not treated specially. The shell expands all tokens in the expression for parameter expansion, command substitution, and quote removal. Next, the shell treats this as an arithmetic expression and substitutes the value of the expression. White Space Splitting (Field Splitting) After parameter expansion, command substitution, and arithmetic expansion the shell scans the results of expansions and substitutions that did not occur in double-quotes for field splitting and multiple fields can result. The shell treats each character of the IFS as a delimiter and uses the delimiters to split the results of parameter expansion and command substitution into fields. Pathname Expansion (File Name Generation) Unless the -f flag is set, file name generation is performed after word splitting is complete. Each word is viewed as a series of patterns, separated by slashes. 
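A combined sketch of the suffix/prefix removal, command substitution, and arithmetic expansion described above; the value of `path` is a hypothetical example:

```shell
#!/bin/sh
path=/usr/local/bin/dash.tar.gz

echo "${path##*/}"    # largest prefix matching */ removed  -> dash.tar.gz (like basename)
echo "${path%/*}"     # smallest suffix matching /* removed -> /usr/local/bin (like dirname)
echo "${path%%.*}"    # largest suffix matching .* removed  -> /usr/local/bin/dash
echo "${path#*.}"     # smallest prefix matching *. removed -> tar.gz

dir=$(dirname "$path")       # command substitution; the trailing newline is stripped
echo "$dir"                  # -> /usr/local/bin

x=7
echo "$(( (x + 3) * 2 ))"    # arithmetic expansion; variables need no $ inside $(( )) -> 20
```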
The process of expansion replaces the word with the names of all existing files whose names can be formed by replacing each pattern with a string that matches the specified pattern. There are two restrictions on this: first, a pattern cannot match a string containing a slash, and second, a pattern cannot match a string starting with a period unless the first character of the pattern is a period. The next section describes the patterns used for both Pathname Expansion and the case command. Shell Patterns A pattern consists of normal characters, which match themselves, and meta-characters. The meta-characters are !, *, ?, and [. These characters lose their special meanings if they are quoted. When command or variable substitution is performed and the dollar sign or back quotes are not double quoted, the value of the variable or the output of the command is scanned for these characters and they are turned into meta-characters. An asterisk (*) matches any string of characters. A question mark matches any single character. A left bracket ([) introduces a character class. The end of the character class is indicated by a (]); if the ] is missing then the [ matches a [ rather than introducing a character class. A character class matches any of the characters between the square brackets. A range of characters may be specified using a minus sign. The character class may be complemented by making an exclamation point the first character of the character class. To include a ] in a character class, make it the first character listed (after the !, if any). To include a minus sign, make it the first or last character listed. Builtins This section lists the builtin commands which are builtin because they need to perform some operation that can't be performed by a separate process. In addition to these, there are several other commands that may be builtin for efficiency (e.g. printf(1), echo(1), test(1), etc). : true A null command that returns a 0 (true) exit value. 
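The pattern syntax described under Shell Patterns is the same one the case command uses; a sketch with a hypothetical `classify` helper:

```shell
#!/bin/sh
classify() {
  case $1 in
    *.tar.gz|*.tgz) echo "gzipped tarball" ;;
    [0-9]*)         echo "starts with a digit" ;;
    [!a-z]*)        echo "does not start with a lowercase letter" ;;
    ??)             echo "exactly two characters" ;;
    *)              echo "something else" ;;
  esac
}

classify backup.tar.gz   # -> gzipped tarball
classify 9lives          # -> starts with a digit
classify Makefile        # -> does not start with a lowercase letter
classify ok              # -> exactly two characters
```

Patterns are tried top to bottom, so more specific alternatives should come first.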
false A null command that returns a 1 (false) exit value. . file The commands in the specified file are read and executed by the shell. alias [name[=string ...]] If name=string is specified, the shell defines the alias name with value string. If just name is specified, the value of the alias name is printed. With no arguments, the alias builtin prints the names and values of all defined aliases (see unalias). bg [job] ... Continue the specified jobs (or the current job if no jobs are given) in the background. command [-p] [-v] [-V] command [arg ...] Execute the specified command but ignore shell functions when searching for it. (This is useful when you have a shell function with the same name as a builtin command.) -p search for command using a PATH that guarantees to find all the standard utilities. -V Do not execute the command but search for the command and print the resolution of the command search. This is the same as the type builtin. -v Do not execute the command but search for the command and print the absolute pathname of utilities, the name for builtins or the expansion of aliases. cd|chdir - cd|chdir [-LP] [directory] Switch to the specified directory (default HOME). If an entry for CDPATH appears in the environment of the cd command or the shell variable CDPATH is set and the directory name does not begin with a slash, then the directories listed in CDPATH will be searched for the specified directory. The format of CDPATH is the same as that of PATH. If a single dash is specified as the argument, it will be replaced by the value of OLDPWD. The cd command will print out the name of the directory that it actually switched to if this is different from the name that the user gave. These may be different either because the CDPATH mechanism was used or because the argument is a single dash. The -P option causes the physical directory structure to be used, that is, all symbolic links are resolved to their respective values. 
The -L option turns off the effect of any preceding -P options. echo [-n] args... Print the arguments on the standard output, separated by spaces. Unless the -n option is present, a newline is output following the arguments. If any of the following sequences of characters is encountered during output, the sequence is not output. Instead, the specified action is performed: \b A backspace character is output. \c Subsequent output is suppressed. This is normally used at the end of the last argument to suppress the trailing newline that echo would otherwise output. \f Output a form feed. \n Output a newline character. \r Output a carriage return. \t Output a (horizontal) tab character. \v Output a vertical tab. \0digits Output the character whose value is given by zero to three octal digits. If there are zero digits, a nul character is output. \\ Output a backslash. All other backslash sequences elicit undefined behaviour. eval string ... Concatenate all the arguments with spaces. Then re-parse and execute the command. exec [command arg ...] Unless command is omitted, the shell process is replaced with the specified program (which must be a real program, not a shell builtin or function). Any redirections on the exec command are marked as permanent, so that they are not undone when the exec command finishes. exit [exitstatus] Terminate the shell process. If exitstatus is given it is used as the exit status of the shell; otherwise the exit status of the preceding command is used. export name ... export -p The specified names are exported so that they will appear in the environment of subsequent commands. The only way to un-export a variable is to unset it. The shell allows the value of a variable to be set at the same time it is exported by writing export name=value With no arguments the export command lists the names of all exported variables. With the -p option specified the output will be formatted suitably for non-interactive use. 
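The difference export makes can be seen by asking a child shell what it inherits; a sketch (the variable name `GREETING` is made up for illustration):

```shell
#!/bin/sh
GREETING=hello
sh -c 'echo "child sees: [$GREETING]"'   # plain shell variables are not passed to children: prints []

export GREETING
sh -c 'echo "child sees: [$GREETING]"'   # after export it is in the environment: prints [hello]
```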
fc [-e editor] [first [last]] fc -l [-nr] [first [last]] fc -s [old=new] [first] The fc builtin lists, or edits and re-executes, commands previously entered to an interactive shell. -e editor Use the editor named by editor to edit the commands. The editor string is a command name, subject to search via the PATH variable. The value in the FCEDIT variable is used as a default when -e is not specified. If FCEDIT is null or unset, the value of the EDITOR variable is used. If EDITOR is null or unset, ed(1) is used as the editor. -l (ell) List the commands rather than invoking an editor on them. The commands are written in the sequence indicated by the first and last operands, as affected by -r, with each command preceded by the command number. -n Suppress command numbers when listing with -l. -r Reverse the order of the commands listed (with -l) or edited (with neither -l nor -s). -s Re-execute the command without invoking an editor. first last Select the commands to list or edit. The number of previous commands that can be accessed are determined by the value of the HISTSIZE variable. The value of first or last or both are one of the following: [+]number A positive number representing a command number; command numbers can be displayed with the -l option. -number A negative decimal number representing the command that was executed number of commands previously. For example, -1 is the immediately previous command. string A string indicating the most recently entered command that begins with that string. If the old=new operand is not also specified with -s, the string form of the first operand cannot contain an embedded equal sign. The following environment variables affect the execution of fc: FCEDIT Name of the editor to use. HISTSIZE The number of previous commands that are accessible. fg [job] Move the specified job or the current job to the foreground. getopts optstring var [arg ...] The POSIX getopts command, not to be confused with the Bell Labs-derived getopt(1). 
The first argument should be a series of letters, each of which may be optionally followed by a colon to indicate that the option requires an argument. The variable specified is set to the parsed option. The getopts command deprecates the older getopt(1) utility due to its handling of arguments containing whitespace. The getopts builtin may be used to obtain options and their arguments from a list of parameters. When invoked, getopts places the value of the next option from the option string in the list in the shell variable specified by var and its index in the shell variable OPTIND. When the shell is invoked, OPTIND is initialized to 1. For each option that requires an argument, the getopts builtin will place it in the shell variable OPTARG. If an option is not allowed for in the optstring, then OPTARG will be unset. By default, the variables $1, ..., $n are inspected; if args are specified, they'll be parsed instead. optstring is a string of recognized option letters (see getopt(3)). If a letter is followed by a colon, the option is expected to have an argument which may or may not be separated from it by white space. If an option character is not found where expected, getopts will set the variable var to a ?; getopts will then unset OPTARG and write output to standard error. By specifying a colon as the first character of optstring all errors will be ignored. After the last option getopts will return a non-zero value and set var to ?. The following code fragment shows how one might process the arguments for a command that can take the options [a] and [b], and the option [c], which requires an argument. while getopts abc: f do case $f in a | b) flag=$f;; c) carg=$OPTARG;; \?) echo $USAGE; exit 1;; esac done shift $((OPTIND - 1)) This code will accept any of the following as equivalent: cmd -acarg file file cmd -a -c arg file file cmd -carg -a file file cmd -a -carg -- file file hash [command ...] 
hash -r The shell maintains a hash table which remembers the locations of commands. With no arguments whatsoever, the hash command prints out the contents of this table. Entries which have not been looked at since the last cd command are marked with an asterisk; it is possible for these entries to be invalid. With arguments, the hash command removes the specified commands from the hash table (unless they are functions) and then locates them. The -r option causes the hash command to delete all the entries in the hash table except for functions. jobs [-lp] [job ...] Display the status of all, or just the specified, jobs: By default display the job number, currency (+-) status, if any, the job state, and its shell command. -l also output the PID of the group leader, and just the PID and shell commands of other members of the job. -p Display only leader PIDs, one per line. kill [-s sigspec | -signum | -sigspec] [pid | job ...] Equivalent to kill(1), but a job spec may also be specified. Signals can be either case-insensitive names without SIG prefixes or decimal numbers; the default is TERM. kill -l [signum | exitstatus] List available signal names without the SIG prefix (sigspecs). If signum specified, display just the sigspec for that signal. If exitstatus specified (> 128), display just the sigspec that caused it. pwd [-LP] builtin command remembers what the current directory is rather than recomputing it each time. This makes it faster. However, if the current directory is renamed, the builtin version of pwd will continue to print the old name for the directory. The -P option causes the physical value of the current working directory to be shown, that is, all symbolic links are resolved to their respective values. The -L option turns off the effect of any preceding -P options. read [-p prompt] [-r] variable [...] The prompt is printed if the -p option is specified and the standard input is a terminal. Then a line is read from the standard input. 
The trailing newline is deleted from the line and the line is split as described in the section on word splitting above, and the pieces are assigned to the variables in order. At least one variable must be specified. If there are more pieces than variables, the remaining pieces (along with the characters in IFS that separated them) are assigned to the last variable. If there are more variables than pieces, the remaining variables are assigned the null string. The read builtin will indicate success unless EOF is encountered on input, in which case failure is returned. By default, unless the -r option is specified, the backslash \ acts as an escape character, causing the following character to be treated literally. If a backslash is followed by a newline, the backslash and the newline will be deleted. readonly name ... readonly -p The specified names are marked as read only, so that they cannot be subsequently modified or unset. The shell allows the value of a variable to be set at the same time it is marked read only by writing readonly name=value With no arguments the readonly command lists the names of all read only variables. With the -p option specified the output will be formatted suitably for non-interactive use. printf format [value]... printf formats and prints its arguments according to format, a character string which contains three types of objects: plain characters, which are simply copied to standard output, character escape sequences which are converted and copied to the standard output, and format specifications, each of which causes printing of the next successive value. Each value is treated as a string if the corresponding format specification is either b, c, or s; otherwise it is evaluated as a C constant, with the following additions: A leading plus or minus sign is allowed. If the leading character is a single or double quote, the value of the next byte. The format string is reused as often as necessary until all values are consumed. 
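The read builtin's field splitting described above can be sketched by parsing a passwd-style record with a custom IFS (the sample line is hypothetical):

```shell
#!/bin/sh
line="root:x:0:0:root:/root:/bin/sh"   # sample record, for illustration only

# The IFS=: prefix applies only to this read invocation.
IFS=: read -r user pw uid gid rest <<EOF
$line
EOF

echo "$user has uid $uid"   # -> root has uid 0
echo "$rest"                # leftover fields land in the last variable -> root:/root:/bin/sh
```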
Any extra format specifications are evaluated with zero or the null string. Character escape sequences are in backslash notation as defined in ANSI X3.159-1989 (ANSI C89). The characters and their meanings are as follows: \a Write a <bell> character. \b Write a <backspace> character. \f Write a <form-feed> character. \n Write a <new-line> character. \r Write a <carriage return> character. \t Write a <tab> character. \v Write a <vertical tab> character. \\ Write a backslash character. \num Write an 8-bit character whose ASCII value is the 1-, 2-, or 3-digit octal number num. Each format specification is introduced by the percent character (``%''). The remainder of the format specification includes, in the following order: Zero or more of the following flags: # A `#' character specifying that the value should be printed in an ``alternative form''. For b, c, d, and s formats, this option has no effect. For the o format the precision of the number is increased to force the first character of the output string to a zero. For the x (X) format, a non-zero result has the string 0x (0X) prepended to it. For e, E, f, g, and G formats, the result will always contain a decimal point, even if no digits follow the point (normally, a decimal point only appears in the results of those formats if a digit follows the decimal point). For g and G formats, trailing zeros are not removed from the result as they would otherwise be. - A minus sign `-' which specifies left adjustment of the output in the indicated field; + A `+' character specifying that there should always be a sign placed before the number when using signed formats. A space specifying that a blank should be left before a positive number for a signed format. A `+' overrides a space if both are used; 0 A zero `0' character indicating that zero- padding should be used rather than blank- padding. 
A `-' overrides a `0' if both are used; Field Width: An optional digit string specifying a field width; if the output string has fewer characters than the field width it will be blank-padded on the left (or right, if the left-adjustment indicator has been given) to make up the field width (note that a leading zero is a flag, but an embedded zero is part of a field width); Precision: An optional period, ., followed by an optional digit string giving a precision which specifies the number of digits to appear after the decimal point, for e and f formats, or the maximum number of bytes to be printed from a string (b and s formats); if the digit string is missing, the precision is treated as zero; Format: A character which indicates the type of format to use (one of diouxXfeEgGbcs). A field width or precision may be * instead of a digit string. In this case an argument supplies the field width or precision. The format characters and their meanings are: diouXx The argument is printed as a signed decimal (d or i), unsigned octal, unsigned decimal, or unsigned hexadecimal (X or x), respectively. f The argument is printed in the style [-]ddd.ddd where the number of d's after the decimal point is equal to the precision specification for the argument. If the precision is missing, 6 digits are given; if the precision is explicitly 0, no digits and no decimal point are printed. eE The argument is printed in the style [-]d.dddedd where there is one digit before the decimal point and the number after is equal to the precision specification for the argument; when the precision is missing, 6 digits are produced. An upper-case E is used for an `E' format. gG The argument is printed in style f or in style e (E) whichever gives full precision in minimum space. b Characters from the string argument are printed with backslash-escape sequences expanded. 
The following additional backslash-escape sequences are supported: \c Causes printf to ignore any remaining characters in the string operand containing it, any remaining string operands, and any additional characters in the format operand. \0num Write an 8-bit character whose ASCII value is the 1-, 2-, or 3-digit octal number num. c The first character of argument is printed. s Characters from the string argument are printed until the end is reached or until the number of bytes indicated by the precision specification is reached; if the precision is omitted, all characters in the string are printed. % Print a `%'; no argument is used. In no case does a non-existent or small field width cause truncation of a field; padding takes place only if the specified field width exceeds the actual width. set [{ -options | +options | -- }] arg ... The set command performs three different functions. With no arguments, it lists the values of all shell variables. If options are given, it sets the specified option flags, or clears them as described in the section called Argument List Processing. As a special case, if the option is -o or +o and no argument is supplied, the shell prints the settings of all its options. If the option is -o, the settings are printed in a human-readable format; if the option is +o, the settings are printed in a format suitable for reinput to the shell to affect the same option settings. The third use of the set command is to set the values of the shell's positional parameters to the specified args. To change the positional parameters without changing any options, use -- as the first argument to set. If no args are present, the set command will clear all the positional parameters (equivalent to executing shift $#.) shift [n] Shift the positional parameters n times. A shift sets the value of $1 to the value of $2, the value of $2 to the value of $3, and so on, decreasing the value of $# by one. 
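The printf flags, field widths, and precisions described above can be exercised in a short sketch:

```shell
#!/bin/sh
printf '%-8s|%5d|%08.3f\n' name 42 3.14159   # left-adjust, field width, zero-padding -> "name    |   42|0003.142"
printf '%#x %#o\n' 255 8                     # alternative form prepends 0x / a leading zero -> "0xff 010"
printf '%s=%s\n' a 1 b 2                     # the format string is reused until all values are consumed
```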
If n is greater than the number of positional parameters, shift will issue an error message, and exit with return status 2. test expression [ expression ] The test utility evaluates the expression and, if it evaluates to true, returns a zero (true) exit status; otherwise it returns 1 (false). If there is no expression, test also returns 1 (false). All operators and flags are separate arguments to the test utility. The following primaries are used to construct expression: -b file True if file exists and is a block special file. -c file True if file exists and is a character special file. -d file True if file exists and is a directory. -e file True if file exists (regardless of type). -f file True if file exists and is a regular file. -g file True if file exists and its set group ID flag is set. -h file True if file exists and is a symbolic link. -k file True if file exists and its sticky bit is set. -n string True if the length of string is nonzero. -p file True if file is a named pipe (FIFO). -r file True if file exists and is readable. -s file True if file exists and has a size greater than zero. -t file_descriptor True if the file whose file descriptor number is file_descriptor is open and is associated with a terminal. -u file True if file exists and its set user ID flag is set. -w file True if file exists and is writable. True indicates only that the write flag is on. The file is not writable on a read-only file system even if this test indicates true. -x file True if file exists and is executable. True indicates only that the execute flag is on. If file is a directory, true indicates that file can be searched. -z string True if the length of string is zero. -L file True if file exists and is a symbolic link. This operator is retained for compatibility with previous versions of this program. Do not rely on its existence; use -h instead. -O file True if file exists and its owner matches the effective user id of this process. 
-G file True if file exists and its group matches the effective group id of this process. -S file True if file exists and is a socket. file1 -nt file2 True if file1 and file2 exist and file1 is newer than file2. file1 -ot file2 True if file1 and file2 exist and file1 is older than file2. file1 -ef file2 True if file1 and file2 exist and refer to the same file. string True if string is not the null string. s1 = s2 True if the strings s1 and s2 are identical. s1 != s2 True if the strings s1 and s2 are not identical. s1 < s2 True if string s1 comes before s2 based on the ASCII value of their characters. s1 > s2 True if string s1 comes after s2 based on the ASCII value of their characters. n1 -eq n2 True if the integers n1 and n2 are algebraically equal. n1 -ne n2 True if the integers n1 and n2 are not algebraically equal. n1 -gt n2 True if the integer n1 is algebraically greater than the integer n2. n1 -ge n2 True if the integer n1 is algebraically greater than or equal to the integer n2. n1 -lt n2 True if the integer n1 is algebraically less than the integer n2. n1 -le n2 True if the integer n1 is algebraically less than or equal to the integer n2. These primaries can be combined with the following operators: ! expression True if expression is false. expression1 -a expression2 True if both expression1 and expression2 are true. expression1 -o expression2 True if either expression1 or expression2 are true. (expression) True if expression is true. The -a operator has higher precedence than the -o operator. times Print the accumulated user and system times for the shell and for processes run from the shell. The return status is 0. trap [action signal ...] Cause the shell to parse and execute action when any of the specified signals are received. The signals are specified by signal number or as the name of the signal. If signal is 0 or EXIT, the action is executed when the shell exits. action may be empty (''), which causes the specified signals to be ignored. 
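A few of the test primaries above in a runnable sketch; the probed file is created on the spot with mktemp so the assertions hold anywhere:

```shell
#!/bin/sh
tmp=$(mktemp)                      # a real, empty regular file to probe

[ -f "$tmp" ] && echo "regular file"
[ -s "$tmp" ] || echo "empty"
[ -r "$tmp" ] && [ -w "$tmp" ] && echo "readable and writable"

n=5
if [ "$n" -gt 3 ] && [ "$n" -le 10 ]; then   # numeric comparisons use -gt/-le, not > / <
  echo "in range"
fi

rm -f "$tmp"
```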
With action omitted or set to `-' the specified signals are set to their default action. When the shell forks off a subshell, it resets trapped (but not ignored) signals to the default action. The trap command has no effect on signals that were ignored on entry to the shell. trap without any arguments cause it to write a list of signals and their associated action to the standard output in a format that is suitable as an input to the shell that achieves the same trapping results. Examples: trap List trapped signals and their corresponding action trap '' INT QUIT tstp 30 Ignore signals INT QUIT TSTP USR1 trap date INT Print date upon receiving signal INT type [name ...] Interpret each name as a command and print the resolution of the command search. Possible resolutions are: shell keyword, alias, shell builtin, command, tracked alias and not found. For aliases the alias expansion is printed; for commands and tracked aliases the complete pathname of the command is printed. ulimit [-H | -S] [-a | -tfdscmlpnvwr [value]] Inquire about or set the hard or soft limits on processes or set new limits. The choice between hard limit (which no process is allowed to violate, and which may not be raised once it has been lowered) and soft limit (which causes processes to be signaled but not necessarily killed, and which may be raised) is made with these flags: -H set or inquire about hard limits -S set or inquire about soft limits. If neither -H nor -S is specified, the soft limit is displayed or both limits are set. If both are specified, the last one wins. 
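The trap behaviour described above is commonly used for temporary-file cleanup; a minimal sketch of the EXIT-trap idiom:

```shell
#!/bin/sh
tmp=$(mktemp)
trap 'rm -f "$tmp"' EXIT    # runs when the shell exits, however it exits
echo "working in $tmp"
# ... use $tmp ...
# no explicit rm needed: the EXIT trap removes the file on the way out
```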
The limit to be interrogated or set, then, is chosen by specifying any one of these flags: -a show all the current limits -t show or set the limit on CPU time (in seconds) -f show or set the limit on the largest file that can be created (in 512-byte blocks) -d show or set the limit on the data segment size of a process (in kilobytes) -s show or set the limit on the stack size of a process (in kilobytes) -c show or set the limit on the largest core dump size that can be produced (in 512-byte blocks) -m show or set the limit on the total physical memory that can be in use by a process (in kilobytes) -l show or set the limit on how much memory a process can lock with mlock(2) (in kilobytes) -p show or set the limit on the number of processes this user can have at one time -n show or set the limit on the number of files a process can have open at once -v show or set the limit on the total virtual memory that can be in use by a process (in kilobytes) -w show or set the limit on the total number of locks held by a process -r show or set the limit on the real-time scheduling priority of a process If none of these is specified, it is the limit on file size that is shown or set. If value is specified, the limit is set to that number; otherwise the current limit is displayed. Limits of an arbitrary process can be displayed or set using the sysctl(8) utility. umask [mask] Set the value of umask (see umask(2)) to the specified octal value. If the argument is omitted, the umask value is printed. unalias [-a] [name] If name is specified, the shell removes that alias. If -a is specified, all aliases are removed. unset [-fv] name ... The specified variables and functions are unset and unexported. If -f or -v is specified, the corresponding function or variable is unset, respectively. If a given name corresponds to both a variable and a function, and no options are given, only the variable is unset. 
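The umask builtin described above affects the permissions of files created afterwards; a small sketch that saves and restores the mask:

```shell
#!/bin/sh
old=$(umask)     # save the current mask
umask 022        # clear group/other write bits on newly created files

dir=$(mktemp -d)
touch "$dir/f"
ls -l "$dir/f"   # mode is rw-r--r--: the default 0666 minus the 022 mask

umask "$old"     # restore the saved mask
rm -r "$dir"
```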
wait [job] Wait for the specified job to complete and return the exit status of the last process in the job. If the argument is omitted, wait for all jobs to complete and return an exit status of zero. Command Line Editing When sh is being used interactively from a terminal, the current command and the command history (see fc in Builtins) can be edited using vi-mode command-line editing. This mode uses commands, described below, similar to a subset of those described in the vi man page. The command set -o vi enables vi-mode editing and places sh into vi insert mode. With vi-mode enabled, sh can be switched between insert mode and command mode. It is similar to vi: typing ESC enters vi command mode. Hitting return while in command mode will pass the line to the shell. EXIT STATUS top Errors that are detected by the shell, such as a syntax error, will cause the shell to exit with a non-zero exit status. If the shell is not an interactive shell, the execution of the shell file will be aborted. Otherwise the shell will return the exit status of the last command executed, or if the exit builtin is used with a numeric argument, it will return the argument. ENVIRONMENT top HOME Set automatically by login(1) from the user's login directory in the password file (passwd(4)). This environment variable also functions as the default argument for the cd builtin. PATH The default search path for executables. See the above section Path Search. CDPATH The search path used with the cd builtin. MAIL The name of a mail file that will be checked for the arrival of new mail. Overridden by MAILPATH. MAILCHECK The frequency in seconds that the shell checks for the arrival of mail in the files specified by the MAILPATH or the MAIL file. If set to 0, the check will occur at each prompt. MAILPATH A colon : separated list of file names, for the shell to check for incoming mail. This environment setting overrides the MAIL setting. There is a maximum of 10 mailboxes that can be monitored at once. 
PS1 The primary prompt string, which defaults to $ , unless you are the superuser, in which case it defaults to # . PS2 The secondary prompt string, which defaults to > . PS4 Output before each line when execution trace (set -x) is enabled, defaults to + . IFS Input Field Separators. This is normally set to space, tab, and newline. See the White Space Splitting section for more details. TERM The default terminal setting for the shell. This is inherited by children of the shell, and is used in the history editing modes. HISTSIZE The number of lines in the history buffer for the shell. PWD The logical value of the current working directory. This is set by the cd command. OLDPWD The previous logical value of the current working directory. This is set by the cd command. PPID The process ID of the parent process of the shell. FILES top $HOME/.profile /etc/profile SEE ALSO top csh(1), echo(1), getopt(1), ksh(1), login(1), printf(1), test(1), getopt(3), passwd(5), environ(7), sysctl(8) HISTORY top dash is a POSIX-compliant implementation of /bin/sh that aims to be as small as possible. dash is a direct descendant of the NetBSD version of ash (the Almquist SHell), ported to Linux in early 1997. It was renamed to dash in 2002. BUGS top Setuid shell scripts should be avoided at all costs, as they are a significant security risk. PS1, PS2, and PS4 should be subject to parameter expansion before being displayed. COLOPHON top This page is part of the dash (Debian Almquist shell) project. Information about the project can be found at http://gondor.apana.org.au/~herbert/dash/. If you have a bug report for this manual page, send it to dash@vger.kernel.org. This page was obtained from the project's upstream Git repository git://git.kernel.org/pub/scm/utils/dash/dash.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-01-09.) 
If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org GNU January 19, 2003 DASH(1) Pages that refer to this page: intro(1), systemctl(1), system(3) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH. | # dash\n\n> Debian Almquist Shell, a modern, POSIX-compliant implementation of `sh` (not Bash-compatible).\n> More information: <https://manned.org/dash>.\n\n- Start an interactive shell session:\n\n`dash`\n\n- Execute specific [c]ommands:\n\n`dash -c "{{echo 'dash is executed'}}"`\n\n- Execute a specific script:\n\n`dash {{path/to/script.sh}}`\n\n- Check a specific script for syntax errors:\n\n`dash -n {{path/to/script.sh}}`\n\n- Execute a specific script while printing each command before executing it:\n\n`dash -x {{path/to/script.sh}}`\n\n- Execute a specific script and stop at the first [e]rror:\n\n`dash -e {{path/to/script.sh}}`\n\n- Execute specific commands from `stdin`:\n\n`{{echo "echo 'dash is executed'"}} | dash`\n |
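The `wait` builtin and the exit-status rules described in the dash page above can be sketched in a few lines; this is a minimal illustration, runnable under any POSIX shell (dash included):

```shell
# Sketch of the `wait` builtin and exit-status behaviour described above.

( exit 3 ) &                        # background job that exits with status 3
pid=$!
if wait "$pid"; then s=0; else s=$?; fi
echo "wait returned $s"             # wait returned 3

# A non-interactive shell returns the argument given to the exit builtin:
if sh -c 'exit 7'; then c=0; else c=$?; fi
echo "child shell exited with status $c"   # child shell exited with status 7
```

The `if` guards are only there so the snippet also behaves under `set -e`; interactively, `wait "$pid"; echo $?` shows the same status.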
date | date(1) - Linux manual page DATE(1) User Commands DATE(1) NAME top date - print or set the system date and time SYNOPSIS top date [OPTION]... [+FORMAT] date [-u|--utc|--universal] [MMDDhhmm[[CC]YY][.ss]] DESCRIPTION top Display date and time in the given FORMAT. With -s, or with [MMDDhhmm[[CC]YY][.ss]], set the date and time. Mandatory arguments to long options are mandatory for short options too. -d, --date=STRING display time described by STRING, not 'now' --debug annotate the parsed date, and warn about questionable usage to stderr -f, --file=DATEFILE like --date; once for each line of DATEFILE -I[FMT], --iso-8601[=FMT] output date/time in ISO 8601 format. FMT='date' for date only (the default), 'hours', 'minutes', 'seconds', or 'ns' for date and time to the indicated precision. Example: 2006-08-14T02:34:56-06:00 --resolution output the available resolution of timestamps Example: 0.000000001 -R, --rfc-email output date and time in RFC 5322 format. Example: Mon, 14 Aug 2006 02:34:56 -0600 --rfc-3339=FMT output date/time in RFC 3339 format. FMT='date', 'seconds', or 'ns' for date and time to the indicated precision. Example: 2006-08-14 02:34:56-06:00 -r, --reference=FILE display the last modification time of FILE -s, --set=STRING set time described by STRING -u, --utc, --universal print or set Coordinated Universal Time (UTC) --help display this help and exit --version output version information and exit All options that specify the date to display are mutually exclusive. I.e.: --date, --file, --reference, --resolution. FORMAT controls the output. 
Interpreted sequences are: %% a literal % %a locale's abbreviated weekday name (e.g., Sun) %A locale's full weekday name (e.g., Sunday) %b locale's abbreviated month name (e.g., Jan) %B locale's full month name (e.g., January) %c locale's date and time (e.g., Thu Mar 3 23:05:25 2005) %C century; like %Y, except omit last two digits (e.g., 20) %d day of month (e.g., 01) %D date; same as %m/%d/%y %e day of month, space padded; same as %_d %F full date; like %+4Y-%m-%d %g last two digits of year of ISO week number (see %G) %G year of ISO week number (see %V); normally useful only with %V %h same as %b %H hour (00..23) %I hour (01..12) %j day of year (001..366) %k hour, space padded ( 0..23); same as %_H %l hour, space padded ( 1..12); same as %_I %m month (01..12) %M minute (00..59) %n a newline %N nanoseconds (000000000..999999999) %p locale's equivalent of either AM or PM; blank if not known %P like %p, but lower case %q quarter of year (1..4) %r locale's 12-hour clock time (e.g., 11:11:04 PM) %R 24-hour hour and minute; same as %H:%M %s seconds since the Epoch (1970-01-01 00:00 UTC) %S second (00..60) %t a tab %T time; same as %H:%M:%S %u day of week (1..7); 1 is Monday %U week number of year, with Sunday as first day of week (00..53) %V ISO week number, with Monday as first day of week (01..53) %w day of week (0..6); 0 is Sunday %W week number of year, with Monday as first day of week (00..53) %x locale's date representation (e.g., 12/31/99) %X locale's time representation (e.g., 23:13:48) %y last two digits of year (00..99) %Y year %z +hhmm numeric time zone (e.g., -0400) %:z +hh:mm numeric time zone (e.g., -04:00) %::z +hh:mm:ss numeric time zone (e.g., -04:00:00) %:::z numeric time zone with : to necessary precision (e.g., -04, +05:30) %Z alphabetic time zone abbreviation (e.g., EDT) By default, date pads numeric fields with zeroes. 
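Several of the sequences above can be combined in one FORMAT string. A quick sketch with GNU date, pinned to a fixed instant and the C locale so the output is reproducible:

```shell
# Format a fixed instant (86400 seconds after the Epoch) in UTC.
# LC_ALL=C pins the locale-dependent names (%A, %B) to English.
LC_ALL=C date -u -d '@86400' '+%A, %d %B %Y (day %j), %H:%M:%S'
# Friday, 02 January 1970 (day 002), 00:00:00
```

Note that `-d '@N'` (seconds since the Epoch) is a GNU extension, as described in the options above.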
The following optional flags may follow '%': - (hyphen) do not pad the field _ (underscore) pad with spaces 0 (zero) pad with zeros + pad with zeros, and put '+' before future years with >4 digits ^ use upper case if possible # use opposite case if possible After any flags comes an optional field width, as a decimal number; then an optional modifier, which is either E to use the locale's alternate representations if available, or O to use the locale's alternate numeric symbols if available. EXAMPLES top Convert seconds since the Epoch (1970-01-01 UTC) to a date $ date --date='@2147483647' Show the time on the west coast of the US (use tzselect(1) to find TZ) $ TZ='America/Los_Angeles' date Show the local time for 9AM next Friday on the west coast of the US $ date --date='TZ="America/Los_Angeles" 09:00 next Fri' DATE STRING top The --date=STRING is a mostly free format human readable date string such as "Sun, 29 Feb 2004 16:21:42 -0800" or "2004-02-29 16:21:42" or even "next Thursday". A date string may contain items indicating calendar date, time of day, time zone, day of week, relative time, relative date, and numbers. An empty string indicates the beginning of the day. The date string format is more complex than is easily documented here but is fully described in the info documentation. AUTHOR top Written by David MacKenzie. REPORTING BUGS top GNU coreutils online help: <https://www.gnu.org/software/coreutils/> Report any translation bugs to <https://translationproject.org/team/> COPYRIGHT top Copyright 2023 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. 
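The optional padding flags described in the FORMAT discussion above are easiest to compare side by side; a small GNU date sketch against a fixed instant:

```shell
# Default zero padding vs '-' (no padding), '_' (space padding),
# and '^' (upper case) applied to the same fixed date.
LC_ALL=C date -u -d '@0' '+[%d] [%-d] [%_d] [%^b]'
# [01] [1] [ 1] [JAN]
```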
SEE ALSO top Full documentation <https://www.gnu.org/software/coreutils/date> or available locally via: info '(coreutils) date invocation' COLOPHON top This page is part of the coreutils (basic file, shell and text manipulation utilities) project. Information about the project can be found at http://www.gnu.org/software/coreutils/. If you have a bug report for this manual page, see http://www.gnu.org/software/coreutils/. This page was obtained from the tarball coreutils-9.4.tar.xz fetched from http://ftp.gnu.org/gnu/coreutils/ on 2023-12-22. If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org GNU coreutils 9.4 August 2023 DATE(1) Pages that refer to this page: cronnext(1), dir(1), gawk(1), locale(1), ls(1), pmdashping(1), pmdate(1), timedatectl(1), vdir(1), clock_getres(2), gettimeofday(2), stime(2), time(2), ctime(3), difftime(3), posix_spawn(3), strftime(3), tzset(3), rtc(4), crontab(5), locale(5), utmp(5), lvmreport(7), time(7), hwclock(8), rtcwake(8) 
| # date\n\n> Set or display the system date.\n> More information: <https://www.gnu.org/software/coreutils/date>.\n\n- Display the current date using the default locale's format:\n\n`date +%c`\n\n- Display the current date in UTC, using the ISO 8601 format:\n\n`date -u +%Y-%m-%dT%H:%M:%S%Z`\n\n- Display the current date as a Unix timestamp (seconds since the Unix epoch):\n\n`date +%s`\n\n- Convert a date specified as a Unix timestamp to the default format:\n\n`date -d @{{1473305798}}`\n\n- Convert a given date to the Unix timestamp format:\n\n`date -d "{{2018-09-01 00:00}}" +%s --utc`\n\n- Display the current date using the RFC-3339 format (`YYYY-MM-DD hh:mm:ss TZ`):\n\n`date --rfc-3339=s`\n\n- Set the current date using the format `MMDDhhmmYYYY.ss` (`YYYY` and `.ss` are optional):\n\n`date {{093023592021.59}}`\n\n- Display the current ISO week number:\n\n`date +%V`\n |
dd | dd(1) - Linux manual page DD(1) User Commands DD(1) NAME top dd - convert and copy a file SYNOPSIS top dd [OPERAND]... dd OPTION DESCRIPTION top Copy a file, converting and formatting according to the operands. bs=BYTES read and write up to BYTES bytes at a time (default: 512); overrides ibs and obs cbs=BYTES convert BYTES bytes at a time conv=CONVS convert the file as per the comma separated symbol list count=N copy only N input blocks ibs=BYTES read up to BYTES bytes at a time (default: 512) if=FILE read from FILE instead of stdin iflag=FLAGS read as per the comma separated symbol list obs=BYTES write BYTES bytes at a time (default: 512) of=FILE write to FILE instead of stdout oflag=FLAGS write as per the comma separated symbol list seek=N (or oseek=N) skip N obs-sized output blocks skip=N (or iseek=N) skip N ibs-sized input blocks status=LEVEL The LEVEL of information to print to stderr; 'none' suppresses everything but error messages, 'noxfer' suppresses the final transfer statistics, 'progress' shows periodic transfer statistics N and BYTES may be followed by the following multiplicative suffixes: c=1, w=2, b=512, kB=1000, K=1024, MB=1000*1000, M=1024*1024, xM=M, GB=1000*1000*1000, G=1024*1024*1024, and so on for T, P, E, Z, Y, R, Q. Binary prefixes can be used, too: KiB=K, MiB=M, and so on. If N ends in 'B', it counts bytes not blocks. 
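The bs/count arithmetic above can be sketched with a throwaway file (the path here is illustrative): writing 4 input blocks of 1024 bytes each yields exactly 4096 bytes.

```shell
# Write 4 blocks of 1024 zero bytes; status=none suppresses the statistics.
dd if=/dev/zero of=/tmp/dd_demo.bin bs=1024 count=4 status=none
wc -c < /tmp/dd_demo.bin
# 4096
```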
Each CONV symbol may be: ascii from EBCDIC to ASCII ebcdic from ASCII to EBCDIC ibm from ASCII to alternate EBCDIC block pad newline-terminated records with spaces to cbs-size unblock replace trailing spaces in cbs-size records with newline lcase change upper case to lower case ucase change lower case to upper case sparse try to seek rather than write all-NUL output blocks swab swap every pair of input bytes sync pad every input block with NULs to ibs-size; when used with block or unblock, pad with spaces rather than NULs excl fail if the output file already exists nocreat do not create the output file notrunc do not truncate the output file noerror continue after read errors fdatasync physically write output file data before finishing fsync likewise, but also write metadata Each FLAG symbol may be: append append mode (makes sense only for output; conv=notrunc suggested) direct use direct I/O for data directory fail unless a directory dsync use synchronized I/O for data sync likewise, but also for metadata fullblock accumulate full blocks of input (iflag only) nonblock use non-blocking I/O noatime do not update access time nocache Request to drop cache. See also oflag=sync noctty do not assign controlling terminal from file nofollow do not follow symlinks Sending a USR1 signal to a running 'dd' process makes it print I/O statistics to standard error and then resume copying. Options are: --help display this help and exit --version output version information and exit AUTHOR top Written by Paul Rubin, David MacKenzie, and Stuart Kemp. REPORTING BUGS top GNU coreutils online help: <https://www.gnu.org/software/coreutils/> Report any translation bugs to <https://translationproject.org/team/> COPYRIGHT top Copyright 2023 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. 
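Because dd defaults to stdin and stdout, the CONV symbols above can be tried directly in a pipe; a quick sketch of ucase and swab with GNU dd:

```shell
printf 'hello' | dd conv=ucase status=none   # change lower case to upper case
# HELLO
printf 'abcd' | dd conv=swab status=none     # swap every pair of input bytes
# badc
```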
SEE ALSO top Full documentation <https://www.gnu.org/software/coreutils/dd> or available locally via: info '(coreutils) dd invocation' COLOPHON top This page is part of the coreutils (basic file, shell and text manipulation utilities) project. Information about the project can be found at http://www.gnu.org/software/coreutils/. If you have a bug report for this manual page, see http://www.gnu.org/software/coreutils/. This page was obtained from the tarball coreutils-9.4.tar.xz fetched from http://ftp.gnu.org/gnu/coreutils/ on 2023-12-22. If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org GNU coreutils 9.4 August 2023 DD(1) Pages that refer to this page: pipesz(1), truncate(1), xfs(5), fdisk(8), sfdisk(8), swapon(8), xfs_copy(8), xfs_repair(8) 
| # dd\n\n> Convert and copy a file.\n> More information: <https://www.gnu.org/software/coreutils/dd>.\n\n- Make a bootable USB drive from an isohybrid file (such as `archlinux-xxx.iso`) and show the progress:\n\n`dd status=progress if={{path/to/file.iso}} of={{/dev/usb_drive}}`\n\n- Clone a drive to another drive with 4 MiB block size and flush writes before the command terminates:\n\n`dd bs={{4M}} conv={{fsync}} if={{/dev/source_drive}} of={{/dev/dest_drive}}`\n\n- Generate a file with a specific number of random bytes by using kernel random driver:\n\n`dd bs={{100}} count={{1}} if=/dev/urandom of={{path/to/random_file}}`\n\n- Benchmark the write performance of a disk:\n\n`dd bs={{1M}} count={{1000000}} if=/dev/zero of={{path/to/file_1GB}}`\n\n- Create a system backup and save it into an IMG file (can be restored later by swapping `if` and `of`):\n\n`dd if={{/dev/drive_device}} of={{path/to/file.img}}`\n\n- Check the progress of an ongoing dd operation (run this command from another shell):\n\n`kill -USR1 $(pgrep -x dd)`\n |
debugfs | debugfs(8) - Linux manual page DEBUGFS(8) System Manager's Manual DEBUGFS(8) NAME top debugfs - ext2/ext3/ext4 file system debugger SYNOPSIS top debugfs [ -DVwcin ] [ -b blocksize ] [ -s superblock ] [ -f cmd_file ] [ -R request ] [ -d data_source_device ] [ -z undo_file ] [ device ] DESCRIPTION top The debugfs program is an interactive file system debugger. It can be used to examine and change the state of an ext2, ext3, or ext4 file system. device is a block device (e.g., /dev/sdXX) or a file containing the file system. OPTIONS top -w Specifies that the file system should be opened in read-write mode. Without this option, the file system is opened in read-only mode. -n Disables metadata checksum verification. This should only be used if you believe the metadata to be correct despite the complaints of e2fsprogs. -c Specifies that the file system should be opened in catastrophic mode, in which the inode and group bitmaps are not read initially. This can be useful for file systems with significant corruption, but because of this, catastrophic mode forces the file system to be opened read-only. -i Specifies that device represents an ext2 image file created by the e2image program. Since the ext2 image file only contains the superblock, block group descriptor, block and inode allocation bitmaps, and the inode table, many debugfs commands will not function properly. Warning: no safety checks are in place, and debugfs may fail in interesting ways if commands such as ls, dump, etc. are tried without specifying the data_source_device using the -d option. debugfs is a debugging tool. It has rough edges! 
-d data_source_device Used with the -i option, specifies that data_source_device should be used when reading blocks not found in the ext2 image file. This includes data, directory, and indirect blocks. -b blocksize Forces the use of the given block size (in bytes) for the file system, rather than detecting the correct block size automatically. (This option is rarely needed; it is used primarily when the file system is extremely badly damaged/corrupted.) -s superblock Causes the file system superblock to be read from the given block number, instead of using the primary superblock (located at an offset of 1024 bytes from the beginning of the file system). If you specify the -s option, you must also provide the blocksize of the file system via the -b option. (This option is rarely needed; it is used primarily when the file system is extremely badly damaged/corrupted.) -f cmd_file Causes debugfs to read in commands from cmd_file, and execute them. When debugfs is finished executing those commands, it will exit. -D Causes debugfs to open the device using Direct I/O, bypassing the buffer cache. Note that some Linux devices, notably device mapper as of this writing, do not support Direct I/O. -R request Causes debugfs to execute the single command request, and then exit. -V print the version number of debugfs and exit. -z undo_file Before overwriting a file system block, write the old contents of the block to an undo file. This undo file can be used with e2undo(8) to restore the old contents of the file system should something go wrong. If the empty string is passed as the undo_file argument, the undo file will be written to a file named debugfs-device.e2undo in the directory specified via the E2FSPROGS_UNDO_DIR environment variable. WARNING: The undo file cannot be used to recover from a power or system crash. 
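The options above are safest to try against a file-backed image rather than a real device. A minimal sketch, assuming mke2fs and debugfs from e2fsprogs are installed and using an illustrative path; -R runs a single command and exits:

```shell
# Create a 1 MiB file and format it as ext2 (no root needed for a plain file).
dd if=/dev/zero of=/tmp/debugfs_demo.img bs=1024 count=1024 status=none
mke2fs -F -q /tmp/debugfs_demo.img

# -R executes one debugfs command and then exits; here, superblock stats only.
debugfs -R 'show_super_stats -h' /tmp/debugfs_demo.img 2>/dev/null | head -n 5
```

The same image can then be opened interactively (for example with debugfs -w) to experiment safely with the commands this page lists.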
SPECIFYING FILES top Many debugfs commands take a filespec as an argument to specify an inode (as opposed to a pathname) in the file system which is currently opened by debugfs. The filespec argument may be specified in two forms. The first form is an inode number surrounded by angle brackets, e.g., <2>. The second form is a pathname; if the pathname is prefixed by a forward slash ('/'), then it is interpreted relative to the root of the file system which is currently opened by debugfs. If not, the pathname is interpreted relative to the current working directory as maintained by debugfs. This may be modified by using the debugfs command cd. COMMANDS top This is a list of the commands which debugfs supports. blocks filespec Print the blocks used by the inode filespec to stdout. bmap [ -a ] filespec logical_block [physical_block] Print or set the physical block number corresponding to the logical block number logical_block in the inode filespec. If the -a flag is specified, try to allocate a block if necessary. block_dump [-x] [-f filespec] block_num Dump the file system block given by block_num in hex and ASCII format to the console. If the -f option is specified, the block number is relative to the start of the given filespec. If the -x option is specified, the block is interpreted as an extended attribute block and printed to show the structure of extended attribute data structures. cat filespec Dump the contents of the inode filespec to stdout. cd filespec Change the current working directory to filespec. chroot filespec Change the root directory to be the directory filespec. close [-a] Close the currently open file system. If the -a option is specified, write out any changes to the superblock and block group descriptors to all of the backup superblocks, not just to the master superblock. clri filespec Clear the contents of the inode filespec. 
copy_inode source_inode destination_inode Copy the contents of the inode structure in source_inode and use it to overwrite the inode structure at destination_inode. dirsearch filespec filename Search the directory filespec for filename. dirty [-clean] Mark the file system as dirty, so that the superblocks will be written on exit. Additionally, clear the superblock's valid flag, or set it if -clean is specified. dump [-p] filespec out_file Dump the contents of the inode filespec to the output file out_file. If the -p option is given, set the owner, group and permissions information on out_file to match filespec. dump_mmp [mmp_block] Display the multiple-mount protection (mmp) field values. If mmp_block is specified then verify and dump the MMP values from the given block number, otherwise use the s_mmp_block field in the superblock to locate and use the existing MMP block. dx_hash [-h hash_alg] [-s hash_seed] filename Calculate the directory hash of filename. The hash algorithm specified with -h may be legacy, half_md4, or tea. The hash seed specified with -s must be in UUID format. dump_extents [-n] [-l] filespec Dump the extent tree of the inode filespec. The -n flag will cause dump_extents to only display the interior nodes in the extent tree. The -l flag will cause dump_extents to only display the leaf nodes in the extent tree. (Please note that the length and range of blocks for the last extent in an interior node is an estimate by the extents library functions, and is not stored in file system data structures. Hence, the values displayed may not necessarily be accurate, and this does not indicate a problem or corruption in the file system.) dump_unused Dump unused blocks which contain non-null bytes. ea_get [-f outfile]|[-xVC] [-r] filespec attr_name Retrieve the value of the extended attribute attr_name in the file filespec and write it either to stdout or to outfile. ea_list filespec List the extended attributes associated with the file filespec to standard output. 
ea_set [-f infile] [-r] filespec attr_name attr_value Set the value of the extended attribute attr_name in the file filespec to the string value attr_value or read it from infile. ea_rm filespec attr_names... Remove the extended attribute attr_name from the file filespec. expand_dir filespec Expand the directory filespec. fallocate filespec start_block [end_block] Allocate and map uninitialized blocks into filespec between logical block start_block and end_block, inclusive. If end_block is not supplied, this function maps until it runs out of free disk blocks or the maximum file size is reached. Existing mappings are left alone. feature [fs_feature] [-fs_feature] ... Set or clear various file system features in the superblock. After setting or clearing any file system features that were requested, print the current state of the file system feature set. filefrag [-dvr] filespec Print the number of contiguous extents in filespec. If filespec is a directory and the -d option is not specified, filefrag will print the number of contiguous extents for each file in the directory. The -v option will cause filefrag to print a tabular listing of the contiguous extents in the file. The -r option will cause filefrag to do a recursive listing of the directory. find_free_block [count [goal]] Find the first count free blocks, starting from goal, and allocate them. Also available as ffb. find_free_inode [dir [mode]] Find a free inode and allocate it. If present, dir specifies the inode number of the directory in which the inode is to be located. The second optional argument mode specifies the permissions of the new inode. (If the directory bit is set on the mode, the allocation routine will function differently.) Also available as ffi. freeb block [count] Mark the block number block as not allocated. If the optional argument count is present, then count blocks starting at block number block will be marked as not allocated. 
freefrag [-c chunk_kb] Report free space fragmentation on the currently open file system. If the -c option is specified then the freefrag command will print how many free chunks of size chunk_kb can be found in the file system. The chunk size must be a power of two and be larger than the file system block size. freei filespec [num] Free the inode specified by filespec. If num is specified, also clear num-1 inodes after the specified inode. get_quota quota_type id Display quota information for given quota type (user, group, or project) and ID. help Print a list of commands understood by debugfs. htree_dump filespec Dump the hash-indexed directory filespec, showing its tree structure. icheck block ... Print a listing of the inodes which use the one or more blocks specified on the command line. inode_dump [-b]|[-e]|[-x] filespec Print the contents of the inode data structure in hex and ASCII format. The -b option causes the command to only dump the contents of the i_blocks array. The -e option causes the command to only dump the contents of the extra inode space, which is used to store in-line extended attributes. The -x option causes the command to dump the extra inode space interpreted as extended attributes. This is useful to debug corrupted inodes containing extended attributes. imap filespec Print the location of the inode data structure (in the inode table) of the inode filespec. init_filesys device blocksize Create an ext2 file system on device with device size blocksize. Note that this does not fully initialize all of the data structures; to do this, use the mke2fs(8) program. This is just a call to the low-level library, which sets up the superblock and block descriptors. journal_close Close the open journal. journal_open [-c] [-v ver] [-f ext_jnl] Opens the journal for reading and writing. Journal checksumming can be enabled by supplying -c; checksum formats 2 and 3 can be selected with the -v option. An external journal can be loaded from ext_jnl. 
journal_run Replay all transactions in the open journal. journal_write [-b blocks] [-r revoke] [-c] file Write a transaction to the open journal. The list of blocks to write should be supplied as a comma-separated list in blocks; the blocks themselves should be readable from file. A list of blocks to revoke can be supplied as a comma-separated list in revoke. By default, a commit record is written at the end; the -c switch writes an uncommitted transaction. kill_file filespec Deallocate the inode filespec and its blocks. Note that this does not remove any directory entries (if any) to this inode. See the rm(1) command if you wish to unlink a file. lcd directory Change the current working directory of the debugfs process to directory on the native file system. list_quota quota_type Display quota information for given quota type (user, group, or project). ln filespec dest_file Create a link named dest_file which is a hard link to filespec. Note this does not adjust the inode reference counts. logdump [-acsOS] [-b block] [-n num_trans ] [-i filespec] [-f journal_file] [output_file] Dump the contents of the ext3 journal. By default, dump the journal inode as specified in the superblock. However, this can be overridden with the -i option, which dumps the journal from the internal inode given by filespec. A regular file containing journal data can be specified using the -f option. Finally, the -s option utilizes the backup information in the superblock to locate the journal. The -S option causes logdump to print the contents of the journal superblock. The -a option causes the logdump to print the contents of all of the descriptor blocks. The -b option causes logdump to print all journal records that refer to the specified block. The -c option will print out the contents of all of the data blocks selected by the -a and -b options. The -O option causes logdump to display old (checkpointed) journal entries. 
This can be used to try to track down journal problems even after the journal has been replayed. The -n option causes logdump to continue past a journal block which is missing a magic number. Instead, it will stop only when the entire log is printed or after num_trans transactions. ls [-l] [-c] [-d] [-p] [-r] filespec Print a listing of the files in the directory filespec. The -c flag causes directory block checksums (if present) to be displayed. The -d flag will list deleted entries in the directory. The -l flag will list files using a more verbose format. The -p flag will list the files in a format which is more easily parsable by scripts, as well as making it more clear when there are spaces or other non-printing characters at the end of filenames. The -r flag will force the printing of the filename, even if it is encrypted. list_deleted_inodes [limit] List deleted inodes, optionally limited to those deleted within limit seconds ago. Also available as lsdel. This command was useful for recovering from accidental file deletions for ext2 file systems. Unfortunately, it is not useful for this purpose if the files were deleted using ext3 or ext4, since the inode's data blocks are no longer available after the inode is released. modify_inode filespec Modify the contents of the inode structure in the inode filespec. Also available as mi. mkdir filespec Make a directory. mknod filespec [p|[[c|b] major minor]] Create a special device file (a named pipe, character or block device). If a character or block device is to be made, the major and minor device numbers must be specified. ncheck [-c] inode_num ... Take the requested list of inode numbers, and print a listing of pathnames to those inodes. The -c flag will enable checking the file type information in the directory entry to make sure it matches the inode's type. open [-weficD] [-b blocksize] [-d image_filename] [-s superblock] [-z undo_file] device Open a file system for editing. 
The -f flag forces the file system to be opened even if there are some unknown or incompatible file system features which would normally prevent the file system from being opened. The -e flag causes the file system to be opened in exclusive mode. The -b, -c, -d, -i, -s, -w, and -D options behave the same as the command-line options to debugfs. punch filespec start_blk [end_blk] Delete the blocks in the inode ranging from start_blk to end_blk. If end_blk is omitted then this command will function as a truncate command; that is, all of the blocks starting at start_blk through to the end of the file will be deallocated. symlink filespec target Make a symbolic link. pwd Print the current working directory. quit Quit debugfs. rdump directory[...] destination Recursively dump directory, or multiple directories, and all of their contents (including regular files, symbolic links, and other directories) into the named destination, which should be an existing directory on the native file system. rm pathname Unlink pathname. If this causes the inode pointed to by pathname to have no other references, deallocate the file. This command functions as the unlink() system call. rmdir filespec Remove the directory filespec. setb block [count] Mark the block number block as allocated. If the optional argument count is present, then count blocks starting at block number block will be marked as allocated. set_block_group bgnum field value Modify the block group descriptor specified by bgnum so that the block group descriptor field field has value value. Also available as set_bg. set_current_time time Set current time in seconds since Unix epoch to use when setting file system fields. seti filespec [num] Mark inode filespec as in use in the inode bitmap. If num is specified, also set num-1 inodes after the specified inode. set_inode_field filespec field value Modify the inode specified by filespec so that the inode field field has value value. 
The list of valid inode fields which can be set via this command can be displayed by using the command: set_inode_field -l Also available as sif. set_mmp_value field value Modify the multiple-mount protection (MMP) data so that the MMP field field has value value. The list of valid MMP fields which can be set via this command can be displayed by using the command: set_mmp_value -l Also available as smmp. set_super_value field value Set the superblock field field to value. The list of valid superblock fields which can be set via this command can be displayed by using the command: set_super_value -l Also available as ssv. show_debugfs_params Display debugfs parameters such as information about currently opened file system. show_super_stats [-h] List the contents of the super block and the block group descriptors. If the -h flag is given, only print out the superblock contents. Also available as stats. stat filespec Display the contents of the inode structure of the inode filespec. supported_features Display file system features supported by this version of debugfs. testb block [count] Test if the block number block is marked as allocated in the block bitmap. If the optional argument count is present, then count blocks starting at block number block will be tested. testi filespec Test if the inode filespec is marked as allocated in the inode bitmap. undel <inode_number> [pathname] Undelete the specified inode number (which must be surrounded by angle brackets) so that it and its blocks are marked in use, and optionally link the recovered inode to the specified pathname. The e2fsck command should always be run after using the undel command to recover deleted files. Note that if you are recovering a large number of deleted files, linking the inode to a directory may require the directory to be expanded, which could allocate a block that had been used by one of the yet-to-be-undeleted files. 
So it is safer to undelete all of the inodes without specifying a destination pathname, and then in a separate pass, use the debugfs link command to link the inode to the destination pathname, or use e2fsck to check the file system and link all of the recovered inodes to the lost+found directory. unlink pathname Remove the link specified by pathname to an inode. Note this does not adjust the inode reference counts. write source_file out_file Copy the contents of source_file into a newly-created file in the file system named out_file. zap_block [-f filespec] [-o offset] [-l length] [-p pattern] block_num Overwrite the block specified by block_num with zero (NUL) bytes, or if -p is given use the byte specified by pattern. If -f is given then block_num is relative to the start of the file given by filespec. The -o and -l options limit the range of bytes to zap to the specified offset and length relative to the start of the block. zap_block [-f filespec] [-b bit] block_num Bit-flip portions of the physical block_num. If -f is given, then block_num is a logical block relative to the start of filespec. ENVIRONMENT VARIABLES top DEBUGFS_PAGER, PAGER The debugfs program always pipes the output of some commands through a pager program. These commands include: show_super_stats (stats), list_directory (ls), show_inode_info (stat), list_deleted_inodes (lsdel), and htree_dump. The specific pager can be explicitly specified by the DEBUGFS_PAGER environment variable, and if it is not set, by the PAGER environment variable. Note that since a pager is always used, the less(1) pager is not particularly appropriate, since it clears the screen before displaying the output of the command and clears the output from the screen when the pager is exited. Many users prefer to use the less(1) pager for most purposes, which is why the DEBUGFS_PAGER environment variable is available to override the more general PAGER environment variable. 
AUTHOR top debugfs was written by Theodore Ts'o <tytso@mit.edu>. SEE ALSO top dumpe2fs(8), tune2fs(8), e2fsck(8), mke2fs(8), ext4(5) COLOPHON top This page is part of the e2fsprogs (utilities for ext2/3/4 filesystems) project. Information about the project can be found at http://e2fsprogs.sourceforge.net/. It is not known how to report bugs for this man page; if you know, please send a mail to man-pages@man7.org. This page was obtained from the project's upstream Git repository git://git.kernel.org/pub/scm/fs/ext2/e2fsprogs.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-07.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org E2fsprogs version 1.47.0 February 2023 DEBUGFS(8) Pages that refer to this page: ext4(5), e2freefrag(8), e2fsck(8), e2image(8), tune2fs(8) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH. | # debugfs\n\n> An interactive ext2/ext3/ext4 filesystem debugger.\n> More information: <https://manned.org/debugfs>.\n\n- Open the filesystem in read only mode:\n\n`debugfs {{/dev/sdXN}}`\n\n- Open the filesystem in read write mode:\n\n`debugfs -w {{/dev/sdXN}}`\n\n- Read commands from a specified file, execute them and then exit:\n\n`debugfs -f {{path/to/cmd_file}} {{/dev/sdXN}}`\n\n- View the filesystem stats in debugfs console:\n\n`stats`\n\n- Close the filesystem:\n\n`close -a`\n\n- List all available commands:\n\n`lr`\n |
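The debugfs commands above can be tried safely against a throwaway ext2 image in a regular file instead of a real device, so no root access is needed. A minimal sketch, assuming `mke2fs` and `debugfs` (both from e2fsprogs) are installed; `test.img` and `/demo` are illustrative names:

```shell
PATH="$PATH:/sbin:/usr/sbin"           # e2fsprogs binaries often live in sbin
export DEBUGFS_PAGER=cat               # bypass the pager (see ENVIRONMENT VARIABLES)

# Build a 1 MiB ext2 image in a plain file:
dd if=/dev/zero of=test.img bs=1024 count=1024 2>/dev/null
mke2fs -q -F test.img                  # -F: target is a regular file, not a device

# -R runs a single debugfs command and exits; -w opens the image read-write:
debugfs -w -R 'mkdir /demo' test.img
debugfs -R 'ls /' test.img             # listing now includes the new directory
debugfs -R 'stats -h' test.img         # superblock contents only (show_super_stats -h)
```

The same pattern works for the other commands documented above (`write`, `rm`, `lsdel`, `set_inode_field`, …) without ever mounting the image.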
delpart | delpart(8) - Linux manual page delpart(8) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | SEE ALSO | REPORTING BUGS | AVAILABILITY DELPART(8) System Administration DELPART(8) NAME top delpart - tell the kernel to forget about a partition SYNOPSIS top delpart device partition DESCRIPTION top delpart asks the Linux kernel to forget about the specified partition (a number) on the specified device. The command is a simple wrapper around the "del partition" ioctl. This command doesn't manipulate partitions on a block device. OPTIONS top -h, --help Display help text and exit. -V, --version Print version and exit. SEE ALSO top addpart(8), fdisk(8), parted(8), partprobe(8), partx(8) REPORTING BUGS top For bug reports, use the issue tracker at https://github.com/util-linux/util-linux/issues. AVAILABILITY top The delpart command is part of the util-linux package which can be downloaded from Linux Kernel Archive <https://www.kernel.org/pub/linux/utils/util-linux/>. This page is part of the util-linux (a random collection of Linux utilities) project. Information about the project can be found at https://www.kernel.org/pub/linux/utils/util-linux/. If you have a bug report for this manual page, send it to util-linux@vger.kernel.org. This page was obtained from the project's upstream Git repository git://git.kernel.org/pub/scm/utils/util-linux/util-linux.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-14.) 
util-linux 2.39.594-1e0ad 2023-07-19 DELPART(8) Pages that refer to this page: addpart(8), partx(8), resizepart(8) | # delpart\n\n> Ask the Linux kernel to forget about a partition.\n> More information: <https://manned.org/delpart>.\n\n- Tell the kernel to forget about the first partition of `/dev/sda`:\n\n`sudo delpart {{/dev/sda}} {{1}}`\n |
delta | delta(1p) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training delta(1p) Linux manual page PROLOG | NAME | SYNOPSIS | DESCRIPTION | OPTIONS | OPERANDS | STDIN | INPUT FILES | ENVIRONMENT VARIABLES | ASYNCHRONOUS EVENTS | STDOUT | STDERR | OUTPUT FILES | EXTENDED DESCRIPTION | EXIT STATUS | CONSEQUENCES OF ERRORS | APPLICATION USAGE | EXAMPLES | RATIONALE | FUTURE DIRECTIONS | SEE ALSO | COPYRIGHT DELTA(1P) POSIX Programmer's Manual DELTA(1P) PROLOG top This manual page is part of the POSIX Programmer's Manual. The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux. NAME top delta make a delta (change) to an SCCS file (DEVELOPMENT) SYNOPSIS top delta [-nps] [-g list] [-m mrlist] [-r SID] [-y[comment]] file... DESCRIPTION top The delta utility shall be used to permanently introduce into the named SCCS files changes that were made to the files retrieved by get (called the g-files, or generated files). OPTIONS top The delta utility shall conform to the Base Definitions volume of POSIX.12017, Section 12.2, Utility Syntax Guidelines, except that the -y option has an optional option-argument. This optional option-argument shall not be presented as a separate argument. The following options shall be supported: -r SID Uniquely identify which delta is to be made to the SCCS file. The use of this option shall be necessary only if two or more outstanding get commands for editing (get -e) on the same SCCS file were done by the same person (login name). The SID value specified with the -r option can be either the SID specified on the get command line or the SID to be made as reported by the get utility; see get(1p). -s Suppress the report to standard output of the activity associated with each file. See the STDOUT section. 
-n Specify retention of the edited g-file (normally removed at completion of delta processing). -g list Specify a list (see get(1p) for the definition of list) of deltas that shall be ignored when the file is accessed at the change level (SID) created by this delta. -m mrlist Specify a modification request (MR) number that the application shall supply as the reason for creating the new delta. This shall be used if the SCCS file has the v flag set; see admin(1p). If -m is not used and '-' is not specified as a file argument, and the standard input is a terminal, the prompt described in the STDOUT section shall be written to standard output before the standard input is read; if the standard input is not a terminal, no prompt shall be issued. MRs in a list shall be separated by <blank> characters or escaped <newline> characters. An unescaped <newline> shall terminate the MR list. The escape character is <backslash>. If the v flag has a value, it shall be taken to be the name of a program which validates the correctness of the MR numbers. If a non-zero exit status is returned from the MR number validation program, the delta utility shall terminate. (It is assumed that the MR numbers were not all valid.) -y[comment] Describe the reason for making the delta. The comment shall be an arbitrary group of lines that would meet the definition of a text file. Implementations shall support comments from zero to 512 bytes and may support longer values. A null string (specified as either -y, -y"", or in response to a prompt for a comment) shall be considered a valid comment. If -y is not specified and '-' is not specified as a file argument, and the standard input is a terminal, the prompt described in the STDOUT section shall be written to standard output before the standard input is read; if the standard input is not a terminal, no prompt shall be issued. An unescaped <newline> shall terminate the comment text. The escape character is <backslash>. 
The -y option shall be required if the file operand is specified as '-'. -p Write (to standard output) the SCCS file differences before and after the delta is applied in diff format; see diff(1p). OPERANDS top The following operand shall be supported: file A pathname of an existing SCCS file or a directory. If file is a directory, the delta utility shall behave as though each file in the directory were specified as a named file, except that non-SCCS files (last component of the pathname does not begin with s.) and unreadable files shall be silently ignored. If exactly one file operand appears, and it is '-', the standard input shall be read; each line of the standard input shall be taken to be the name of an SCCS file to be processed. Non-SCCS files and unreadable files shall be silently ignored. STDIN top The standard input shall be a text file used only in the following cases: * To read an mrlist or a comment (see the -m and -y options). * A file operand shall be specified as '-'. In this case, the -y option must be used to specify the comment, and if the SCCS file has the v flag set, the -m option must also be used to specify the MR list. INPUT FILES top Input files shall be text files whose data is to be included in the SCCS files. If the first character of any line of an input file is <SOH> in the POSIX locale, the results are unspecified. If this file contains more than 99999 lines, the number of lines recorded in the header for this file shall be 99999 for this delta. ENVIRONMENT VARIABLES top The following environment variables shall affect the execution of delta: LANG Provide a default value for the internationalization variables that are unset or null. (See the Base Definitions volume of POSIX.12017, Section 8.2, Internationalization Variables for the precedence of internationalization variables used to determine the values of locale categories.) 
LC_ALL If set to a non-empty string value, override the values of all the other internationalization variables. LC_CTYPE Determine the locale for the interpretation of sequences of bytes of text data as characters (for example, single-byte as opposed to multi-byte characters in arguments and input files). LC_MESSAGES Determine the locale that should be used to affect the format and contents of diagnostic messages written to standard error, and informative messages written to standard output. NLSPATH Determine the location of message catalogs for the processing of LC_MESSAGES. TZ Determine the timezone in which the time and date are written in the SCCS file. If the TZ variable is unset or NULL, an unspecified system default timezone is used. ASYNCHRONOUS EVENTS top If SIGINT is caught, temporary files shall be cleaned up and delta shall exit with a non-zero exit code. The standard action shall be taken for all other signals; see Section 1.4, Utility Description Defaults. STDOUT top The standard output shall be used only for the following messages in the POSIX locale: * Prompts (see the -m and -y options) in the following formats: "MRs? " "comments? " The MR prompt, if written, shall always precede the comments prompt. * A report of each file's activities (unless the -s option is specified) in the following format: "%s\n%d inserted\n%d deleted\n%d unchanged\n", <New SID>, <number of lines inserted>, <number of lines deleted>, <number of lines unchanged> STDERR top The standard error shall be used only for diagnostic messages. OUTPUT FILES top Any SCCS files updated shall be files of an unspecified format. EXTENDED DESCRIPTION top System Date and Time When a delta is added to an SCCS file, the system date and time shall be recorded for the new delta. If a get is performed using an SCCS file with a date recorded apparently in the future, the behavior is unspecified. EXIT STATUS top The following exit values shall be returned: 0 Successful completion. 
>0 An error occurred. CONSEQUENCES OF ERRORS top Default. The following sections are informative. APPLICATION USAGE top Problems can arise if the system date and time have been modified (for example, put forward and then back again, or unsynchronized clocks across a network) and can also arise when different values of the TZ environment variable are used. Problems of a similar nature can also arise for the operation of the get utility, which records the date and time in the file body. EXAMPLES top None. RATIONALE top None. FUTURE DIRECTIONS top None. SEE ALSO top Section 1.4, Utility Description Defaults, admin(1p), diff(1p), get(1p), prs(1p), rmdel(1p) The Base Definitions volume of POSIX.1-2017, Chapter 8, Environment Variables, Section 12.2, Utility Syntax Guidelines COPYRIGHT top Portions of this text are reprinted and reproduced in electronic form from IEEE Std 1003.1-2017, Standard for Information Technology -- Portable Operating System Interface (POSIX), The Open Group Base Specifications Issue 7, 2018 Edition, Copyright (C) 2018 by the Institute of Electrical and Electronics Engineers, Inc and The Open Group. In the event of any discrepancy between this version and the original IEEE and The Open Group Standard, the original IEEE and The Open Group Standard is the referee document. The original Standard can be obtained online at http://www.opengroup.org/unix/online.html . Any typographical or formatting errors that appear in this page are most likely to have been introduced during the conversion of the source files to man page format. To report such errors, see https://www.kernel.org/doc/man-pages/reporting_bugs.html . IEEE/The Open Group 2017 DELTA(1P) Pages that refer to this page: admin(1p), get(1p), prs(1p), rmdel(1p), sact(1p), sccs(1p), unget(1p), val(1p) 
| # delta\n\n> A viewer for Git and diff output.\n> More information: <https://github.com/dandavison/delta>.\n\n- Compare files or directories:\n\n`delta {{path/to/old_file_or_directory}} {{path/to/new_file_or_directory}}`\n\n- Compare files or directories, showing the line numbers:\n\n`delta --line-numbers {{path/to/old_file_or_directory}} {{path/to/new_file_or_directory}}`\n\n- Compare files or directories, showing the differences side by side:\n\n`delta --side-by-side {{path/to/old_file_or_directory}} {{path/to/new_file_or_directory}}`\n\n- Compare files or directories, ignoring any Git configuration settings:\n\n`delta --no-gitconfig {{path/to/old_file_or_directory}} {{path/to/new_file_or_directory}}`\n\n- Compare, rendering commit hashes, file names, and line numbers as hyperlinks, according to the hyperlink spec for terminal emulators:\n\n`delta --hyperlinks {{path/to/old_file_or_directory}} {{path/to/new_file_or_directory}}`\n\n- Display the current settings:\n\n`delta --show-config`\n\n- Display supported languages and associated file extensions:\n\n`delta --list-languages`\n |
df | df(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training df(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | AUTHOR | REPORTING BUGS | COPYRIGHT | SEE ALSO | COLOPHON DF(1) User Commands DF(1) NAME top df - report file system space usage SYNOPSIS top df [OPTION]... [FILE]... DESCRIPTION top This manual page documents the GNU version of df. df displays the amount of space available on the file system containing each file name argument. If no file name is given, the space available on all currently mounted file systems is shown. Space is shown in 1K blocks by default, unless the environment variable POSIXLY_CORRECT is set, in which case 512-byte blocks are used. If an argument is the absolute file name of a device node containing a mounted file system, df shows the space available on that file system rather than on the file system containing the device node. This version of df cannot show the space available on unmounted file systems, because on most kinds of systems doing so requires non-portable intimate knowledge of file system structures. OPTIONS top Show information about the file system on which each FILE resides, or all file systems by default. Mandatory arguments to long options are mandatory for short options too. -a, --all include pseudo, duplicate, inaccessible file systems -B, --block-size=SIZE scale sizes by SIZE before printing them; e.g., '-BM' prints sizes in units of 1,048,576 bytes; see SIZE format below -h, --human-readable print sizes in powers of 1024 (e.g., 1023M) -H, --si print sizes in powers of 1000 (e.g., 1.1G) -i, --inodes list inode information instead of block usage -k like --block-size=1K -l, --local limit listing to local file systems --no-sync do not invoke sync before getting usage info (default) --output[=FIELD_LIST] use the output format defined by FIELD_LIST, or print all fields if FIELD_LIST is omitted. 
-P, --portability use the POSIX output format --sync invoke sync before getting usage info --total elide all entries insignificant to available space, and produce a grand total -t, --type=TYPE limit listing to file systems of type TYPE -T, --print-type print file system type -x, --exclude-type=TYPE limit listing to file systems not of type TYPE -v (ignored) --help display this help and exit --version output version information and exit Display values are in units of the first available SIZE from --block-size, and the DF_BLOCK_SIZE, BLOCK_SIZE and BLOCKSIZE environment variables. Otherwise, units default to 1024 bytes (or 512 if POSIXLY_CORRECT is set). The SIZE argument is an integer and optional unit (example: 10K is 10*1024). Units are K,M,G,T,P,E,Z,Y,R,Q (powers of 1024) or KB,MB,... (powers of 1000). Binary prefixes can be used, too: KiB=K, MiB=M, and so on. FIELD_LIST is a comma-separated list of columns to be included. Valid field names are: 'source', 'fstype', 'itotal', 'iused', 'iavail', 'ipcent', 'size', 'used', 'avail', 'pcent', 'file' and 'target' (see info page). AUTHOR top Written by Torbjorn Granlund, David MacKenzie, and Paul Eggert. REPORTING BUGS top GNU coreutils online help: <https://www.gnu.org/software/coreutils/> Report any translation bugs to <https://translationproject.org/team/> COPYRIGHT top Copyright 2023 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. SEE ALSO top Full documentation <https://www.gnu.org/software/coreutils/df> or available locally via: info '(coreutils) df invocation' COLOPHON top This page is part of the coreutils (basic file, shell and text manipulation utilities) project. Information about the project can be found at http://www.gnu.org/software/coreutils/. 
If you have a bug report for this manual page, see http://www.gnu.org/software/coreutils/. This page was obtained from the tarball coreutils-9.4.tar.xz fetched from http://ftp.gnu.org/gnu/coreutils/ on 2023-12-22. GNU coreutils 9.4 August 2023 DF(1) Pages that refer to this page: fstab(5), tmpfs(5), findmnt(8), xfs_quota(8) | # df\n\n> Display an overview of the filesystem disk space usage.\n> More information: <https://www.gnu.org/software/coreutils/df>.\n\n- Display all filesystems and their disk usage:\n\n`df`\n\n- Display all filesystems and their disk usage in human-readable form:\n\n`df -h`\n\n- Display the filesystem and its disk usage containing the given file or directory:\n\n`df {{path/to/file_or_directory}}`\n\n- Include statistics on the number of free inodes:\n\n`df -i`\n\n- Display filesystems but exclude the specified types:\n\n`df -x {{squashfs}} -x {{tmpfs}}`\n |
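The -P and --output options described above are the ones that matter when df feeds a script; a small sketch (/tmp is just an example argument — df reports the filesystem containing it):

```shell
# -P (POSIX format) guarantees exactly one line per filesystem, which makes
# the output safe to parse; the default format may wrap long device names.
df -P /tmp                              # one header line plus one data line

# Extract the available space in 1K blocks from the POSIX-format output:
df -kP /tmp | awk 'NR==2 {print $4}'

# --output (a GNU extension) selects specific columns by name:
df --output=source,avail /tmp
```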
diff | diff(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training diff(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | AUTHOR | REPORTING BUGS | COPYRIGHT | SEE ALSO | COLOPHON DIFF(1) User Commands DIFF(1) NAME top diff - compare files line by line SYNOPSIS top diff [OPTION]... FILES DESCRIPTION top Compare FILES line by line. Mandatory arguments to long options are mandatory for short options too. --normal output a normal diff (the default) -q, --brief report only when files differ -s, --report-identical-files report when two files are the same -c, -C NUM, --context[=NUM] output NUM (default 3) lines of copied context -u, -U NUM, --unified[=NUM] output NUM (default 3) lines of unified context -e, --ed output an ed script -n, --rcs output an RCS format diff -y, --side-by-side output in two columns -W, --width=NUM output at most NUM (default 130) print columns --left-column output only the left column of common lines --suppress-common-lines do not output common lines -p, --show-c-function show which C function each change is in -F, --show-function-line=RE show the most recent line matching RE --label LABEL use LABEL instead of file name and timestamp (can be repeated) -t, --expand-tabs expand tabs to spaces in output -T, --initial-tab make tabs line up by prepending a tab --tabsize=NUM tab stops every NUM (default 8) print columns --suppress-blank-empty suppress space or tab before empty output lines -l, --paginate pass output through 'pr' to paginate it -r, --recursive recursively compare any subdirectories found --no-dereference don't follow symbolic links -N, --new-file treat absent files as empty --unidirectional-new-file treat absent first files as empty --ignore-file-name-case ignore case when comparing file names --no-ignore-file-name-case consider case when comparing file names -x, --exclude=PAT exclude files that match PAT -X, --exclude-from=FILE exclude files that match any pattern in FILE -S, --starting-file=FILE 
start with FILE when comparing directories --from-file=FILE1 compare FILE1 to all operands; FILE1 can be a directory --to-file=FILE2 compare all operands to FILE2; FILE2 can be a directory -i, --ignore-case ignore case differences in file contents -E, --ignore-tab-expansion ignore changes due to tab expansion -Z, --ignore-trailing-space ignore white space at line end -b, --ignore-space-change ignore changes in the amount of white space -w, --ignore-all-space ignore all white space -B, --ignore-blank-lines ignore changes where lines are all blank -I, --ignore-matching-lines=RE ignore changes where all lines match RE -a, --text treat all files as text --strip-trailing-cr strip trailing carriage return on input -D, --ifdef=NAME output merged file with '#ifdef NAME' diffs --GTYPE-group-format=GFMT format GTYPE input groups with GFMT --line-format=LFMT format all input lines with LFMT --LTYPE-line-format=LFMT format LTYPE input lines with LFMT These format options provide fine-grained control over the output of diff, generalizing -D/--ifdef. LTYPE is 'old', 'new', or 'unchanged'. GTYPE is LTYPE or 'changed'. 
GFMT (only) may contain: %< lines from FILE1 %> lines from FILE2 %= lines common to FILE1 and FILE2 %[-][WIDTH][.[PREC]]{doxX}LETTER printf-style spec for LETTER LETTERs are as follows for new group, lower case for old group: F first line number L last line number N number of lines = L-F+1 E F-1 M L+1 %(A=B?T:E) if A equals B then T else E LFMT (only) may contain: %L contents of line %l contents of line, excluding any trailing newline %[-][WIDTH][.[PREC]]{doxX}n printf-style spec for input line number Both GFMT and LFMT may contain: %% % %c'C' the single character C %c'\OOO' the character with octal code OOO C the character C (other characters represent themselves) -d, --minimal try hard to find a smaller set of changes --horizon-lines=NUM keep NUM lines of the common prefix and suffix --speed-large-files assume large files and many scattered small changes --color[=WHEN] color output; WHEN is 'never', 'always', or 'auto'; plain --color means --color='auto' --palette=PALETTE the colors to use when --color is active; PALETTE is a colon-separated list of terminfo capabilities --help display this help and exit -v, --version output version information and exit FILES are 'FILE1 FILE2' or 'DIR1 DIR2' or 'DIR FILE' or 'FILE DIR'. If --from-file or --to-file is given, there are no restrictions on FILE(s). If a FILE is '-', read standard input. Exit status is 0 if inputs are the same, 1 if different, 2 if trouble. AUTHOR top Written by Paul Eggert, Mike Haertel, David Hayes, Richard Stallman, and Len Tower. REPORTING BUGS top Report bugs to: bug-diffutils@gnu.org GNU diffutils home page: <https://www.gnu.org/software/diffutils/> General help using GNU software: <https://www.gnu.org/gethelp/> COPYRIGHT top Copyright 2023 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. 
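The -D/--ifdef and line-format options above are easiest to see on a throwaway pair of files; a minimal sketch (file names are illustrative):

```shell
# Two files that differ on their second line:
printf 'a\nb\n' > old.txt
printf 'a\nc\n' > new.txt

# -D merges both versions into C preprocessor conditionals; diff exits with
# status 1 here simply because the inputs differ, so tolerate that status.
diff -D USE_NEW old.txt new.txt || [ $? -eq 1 ]

# The --*-line-format options give full per-line control; %L is the line's
# contents including its newline:
diff --old-line-format='-%L' --new-line-format='+%L' \
     --unchanged-line-format=' %L' old.txt new.txt || [ $? -eq 1 ]
```

The second invocation reproduces a unified-diff-like body (`-` for old lines, `+` for new, a space for unchanged) using only the format machinery.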
SEE ALSO top wdiff(1), cmp(1), diff3(1), sdiff(1), patch(1) The full documentation for diff is maintained as a Texinfo manual. If the info and diff programs are properly installed at your site, the command info diff should give you access to the complete manual. COLOPHON top This page is part of the diffutils (GNU diff utilities) project. Information about the project can be found at http://savannah.gnu.org/projects/diffutils/. If you have a bug report for this manual page, send it to bug-diffutils@gnu.org. This page was obtained from the project's upstream Git repository git://git.savannah.gnu.org/diffutils.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-09-20.) diffutils 3.10.207-774b December 2023 DIFF(1) Pages that refer to this page: cmp(1), diff3(1), gendiff(1), grep(1), patch(1), quilt(1), sdiff(1), suffixes(7) 
| # diff\n\n> Compare files and directories.\n> More information: <https://man7.org/linux/man-pages/man1/diff.1.html>.\n\n- Compare files (lists changes to turn `old_file` into `new_file`):\n\n`diff {{old_file}} {{new_file}}`\n\n- Compare files, ignoring [w]hite spaces:\n\n`diff {{-w|--ignore-all-space}} {{old_file}} {{new_file}}`\n\n- Compare files, showing the differences side by side:\n\n`diff {{-y|--side-by-side}} {{old_file}} {{new_file}}`\n\n- Compare files, showing the differences in [u]nified format (as used by `git diff`):\n\n`diff {{-u|--unified}} {{old_file}} {{new_file}}`\n\n- Compare directories [r]ecursively (shows names for differing files/directories as well as changes made to files):\n\n`diff {{-r|--recursive}} {{old_directory}} {{new_directory}}`\n\n- Compare directories, only showing the names of files that differ:\n\n`diff {{-r|--recursive}} {{-q|--brief}} {{old_directory}} {{new_directory}}`\n\n- Create a patch file for Git from the differences of two text files, treating nonexistent files as empty:\n\n`diff {{-a|--text}} {{-u|--unified}} {{-N|--new-file}} {{old_file}} {{new_file}} > {{diff.patch}}`\n\n- Compare files, showing output in color and try hard to find smaller set of changes:\n\n`diff {{-d|--minimal}} --color=always {{old_file}} {{new_file}}`\n |
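Beyond the examples above, diff's documented exit status (0 = identical, 1 = different, 2 = trouble) makes it usable directly as a condition in scripts; a short sketch with throwaway file names:

```shell
# Create two identical files, then make them differ:
printf 'x\n' > a.txt
cp a.txt b.txt

# Exit status 0: the files are identical.
diff -q a.txt b.txt && echo 'files match'

echo 'y' >> b.txt

# Exit status 1: the files differ (status 2 would mean trouble,
# e.g. a missing file).
if ! diff -q a.txt b.txt >/dev/null; then
    echo 'files differ'
fi
```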
diff3 | diff3(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training diff3(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | AUTHOR | REPORTING BUGS | COPYRIGHT | SEE ALSO | COLOPHON DIFF3(1) User Commands DIFF3(1) NAME top diff3 - compare three files line by line SYNOPSIS top diff3 [OPTION]... MYFILE OLDFILE YOURFILE DESCRIPTION top Compare three files line by line. Mandatory arguments to long options are mandatory for short options too. -A, --show-all output all changes, bracketing conflicts -e, --ed output ed script incorporating changes from OLDFILE to YOURFILE into MYFILE -E, --show-overlap like -e, but bracket conflicts -3, --easy-only like -e, but incorporate only nonoverlapping changes -x, --overlap-only like -e, but incorporate only overlapping changes -X like -x, but bracket conflicts -i append 'w' and 'q' commands to ed scripts -m, --merge output actual merged file, according to -A if no other options are given -a, --text treat all files as text --strip-trailing-cr strip trailing carriage return on input -T, --initial-tab make tabs line up by prepending a tab --diff-program=PROGRAM use PROGRAM to compare files -L, --label=LABEL use LABEL instead of file name (can be repeated up to three times) --help display this help and exit -v, --version output version information and exit The default output format is a somewhat human-readable representation of the changes. The -e, -E, -x, -X (and corresponding long) options cause an ed script to be output instead of the default. Finally, the -m (--merge) option causes diff3 to do the merge internally and output the actual merged file. For unusual input, this is more robust than using ed. If a FILE is '-', read standard input. Exit status is 0 if successful, 1 if conflicts, 2 if trouble. AUTHOR top Written by Randy Smith. 
REPORTING BUGS top Report bugs to: bug-diffutils@gnu.org GNU diffutils home page: <https://www.gnu.org/software/diffutils/> General help using GNU software: <https://www.gnu.org/gethelp/> COPYRIGHT top Copyright 2023 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. SEE ALSO top cmp(1), diff(1), sdiff(1) The full documentation for diff3 is maintained as a Texinfo manual. If the info and diff3 programs are properly installed at your site, the command info diff3 should give you access to the complete manual. COLOPHON top This page is part of the diffutils (GNU diff utilities) project. Information about the project can be found at http://savannah.gnu.org/projects/diffutils/. If you have a bug report for this manual page, send it to bug-diffutils@gnu.org. This page was obtained from the project's upstream Git repository git://git.savannah.gnu.org/diffutils.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-09-20.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org diffutils 3.10.207-774b December 2023 DIFF3(1) Pages that refer to this page: cmp(1), diff(1), patch(1), sdiff(1) 
| # diff3\n\n> Compare three files line by line.\n> More information: <https://www.gnu.org/software/diffutils/manual/html_node/Invoking-diff3.html>.\n\n- Compare files:\n\n`diff3 {{path/to/file1}} {{path/to/file2}} {{path/to/file3}}`\n\n- Show all changes, outlining conflicts:\n\n`diff3 --show-all {{path/to/file1}} {{path/to/file2}} {{path/to/file3}}`\n |
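The -m merge described above can be exercised end to end with three throwaway files: MYFILE and YOURFILE each change a different line of OLDFILE, so the merge is conflict-free. A minimal sketch (file names are illustrative):

```shell
#!/bin/sh
# "old" is the common ancestor; "mine" and "yours" edit different lines.
printf 'one\ntwo\nthree\n' > old
printf 'ONE\ntwo\nthree\n' > mine
printf 'one\ntwo\nTHREE\n' > yours

# -m merges both sets of changes into one file on stdout;
# exit status 0 means no conflicts were found.
diff3 -m mine old yours
```

The merged output contains both edits (ONE and THREE); had both files changed the same line, diff3 would bracket the conflict and exit with status 1.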
dig | dig(1) - Linux man page Name dig - DNS lookup utility Synopsis dig [@server] [-b address] [-c class] [-f filename] [-k filename] [-m] [-p port#] [-q name] [-t type] [-x addr] [-y [hmac:]name:key] [-4] [-6] [name] [type] [class] [queryopt...] dig [-h] dig [global-queryopt...] [query...] Description dig (domain information groper) is a flexible tool for interrogating DNS name servers. It performs DNS lookups and displays the answers that are returned from the name server(s) that were queried. Most DNS administrators use dig to troubleshoot DNS problems because of its flexibility, ease of use and clarity of output. Other lookup tools tend to have less functionality than dig. Although dig is normally used with command-line arguments, it also has a batch mode of operation for reading lookup requests from a file. A brief summary of its command-line arguments and options is printed when the -h option is given. Unlike earlier versions, the BIND 9 implementation of dig allows multiple lookups to be issued from the command line. Unless it is told to query a specific name server, dig will try each of the servers listed in /etc/resolv.conf. When no command line arguments or options are given, dig will perform an NS query for "." (the root). It is possible to set per-user defaults for dig via ${HOME}/.digrc. This file is read and any options in it are applied before the command line arguments. The IN and CH class names overlap with the IN and CH top-level domain names. Either use the -t and -c options to specify the type and class, use the -q option to specify the domain name, or use "IN." and "CH." when looking up these top-level domains. Simple Usage A typical invocation of dig looks like: dig @server name type where: server is the name or IP address of the name server to query. This can be an IPv4 address in dotted-decimal notation or an IPv6 address in colon-delimited notation. 
When the supplied server argument is a hostname, dig resolves that name before querying that name server. If no server argument is provided, dig consults /etc/resolv.conf and queries the name servers listed there. The reply from the name server that responds is displayed. name is the name of the resource record that is to be looked up. type indicates what type of query is required - ANY, A, MX, SIG, etc. type can be any valid query type. If no type argument is supplied, dig will perform a lookup for an A record. Options The -b option sets the source IP address of the query to address. This must be a valid address on one of the host's network interfaces or "0.0.0.0" or "::". An optional port may be specified by appending "#<port>". The default query class (IN for internet) is overridden by the -c option. class is any valid class, such as HS for Hesiod records or CH for Chaosnet records. The -f option makes dig operate in batch mode by reading a list of lookup requests to process from the file filename. The file contains a number of queries, one per line. Each entry in the file should be organized in the same way it would be presented as a query to dig using the command-line interface. The -m option enables memory usage debugging. If a non-standard port number is to be queried, the -p option is used. port# is the port number to which dig will send its queries instead of the standard DNS port number 53. This option would be used to test a name server that has been configured to listen for queries on a non-standard port number. The -4 option forces dig to only use IPv4 query transport. The -6 option forces dig to only use IPv6 query transport. The -t option sets the query type to type. It can be any valid query type which is supported in BIND 9. The default query type is "A", unless the -x option is supplied to indicate a reverse lookup. A zone transfer can be requested by specifying a type of AXFR. 
When an incremental zone transfer (IXFR) is required, type is set to ixfr=N. The incremental zone transfer will contain the changes made to the zone since the serial number in the zone's SOA record was N. The -q option sets the query name to name. This is useful to distinguish the name from other arguments. Reverse lookups - mapping addresses to names - are simplified by the -x option. addr is an IPv4 address in dotted-decimal notation, or a colon-delimited IPv6 address. When this option is used, there is no need to provide the name, class and type arguments. dig automatically performs a lookup for a name like 11.12.13.10.in-addr.arpa and sets the query type and class to PTR and IN respectively. By default, IPv6 addresses are looked up using nibble format under the IP6.ARPA domain. To use the older RFC1886 method using the IP6.INT domain, specify the -i option. Bit string labels (RFC2874) are now experimental and are not attempted. To sign the DNS queries sent by dig and their responses using transaction signatures (TSIG), specify a TSIG key file using the -k option. You can also specify the TSIG key itself on the command line using the -y option; hmac is the type of the TSIG, default HMAC-MD5, name is the name of the TSIG key and key is the actual key. The key is a base-64 encoded string, typically generated by dnssec-keygen(8). Caution should be taken when using the -y option on multi-user systems as the key can be visible in the output from ps(1) or in the shell's history file. When using TSIG authentication with dig, the name server that is queried needs to know the key and algorithm that is being used. In BIND, this is done by providing appropriate key and server statements in named.conf. Query Options dig provides a number of query options which affect the way in which lookups are made and the results displayed. 
Some of these set or reset flag bits in the query header, some determine which sections of the answer get printed, and others determine the timeout and retry strategies. Each query option is identified by a keyword preceded by a plus sign (+). Some keywords set or reset an option. These may be preceded by the string no to negate the meaning of that keyword. Other keywords assign values to options like the timeout interval. They have the form +keyword=value. The query options are: +[no]tcp Use [do not use] TCP when querying name servers. The default behavior is to use UDP unless an AXFR or IXFR query is requested, in which case a TCP connection is used. +[no]vc Use [do not use] TCP when querying name servers. This alternate syntax to +[no]tcp is provided for backwards compatibility. The "vc" stands for "virtual circuit". +[no]ignore Ignore truncation in UDP responses instead of retrying with TCP. By default, TCP retries are performed. +domain=somename Set the search list to contain the single domain somename, as if specified in a domain directive in /etc/resolv.conf, and enable search list processing as if the +search option were given. +[no]search Use [do not use] the search list defined by the searchlist or domain directive in resolv.conf (if any). The search list is not used by default. +[no]showsearch Perform [do not perform] a search showing intermediate results. +[no]defname Deprecated, treated as a synonym for +[no]search. +[no]aaonly Sets the "aa" flag in the query. +[no]aaflag A synonym for +[no]aaonly. +[no]adflag Set [do not set] the AD (authentic data) bit in the query. This requests the server to return whether all of the answer and authority sections have all been validated as secure according to the security policy of the server. AD=1 indicates that all records have been validated as secure and the answer is not from an OPT-OUT range. AD=0 indicates that some part of the answer was insecure or not validated. 
+[no]cdflag Set [do not set] the CD (checking disabled) bit in the query. This requests the server to not perform DNSSEC validation of responses. +[no]cl Display [do not display] the CLASS when printing the record. +[no]ttlid Display [do not display] the TTL when printing the record. +[no]recurse Toggle the setting of the RD (recursion desired) bit in the query. This bit is set by default, which means dig normally sends recursive queries. Recursion is automatically disabled when the +nssearch or +trace query options are used. +[no]nssearch When this option is set, dig attempts to find the authoritative name servers for the zone containing the name being looked up and display the SOA record that each name server has for the zone. +[no]trace Toggle tracing of the delegation path from the root name servers for the name being looked up. Tracing is disabled by default. When tracing is enabled, dig makes iterative queries to resolve the name being looked up. It will follow referrals from the root servers, showing the answer from each server that was used to resolve the lookup. +[no]cmd Toggles the printing of the initial comment in the output identifying the version of dig and the query options that have been applied. This comment is printed by default. +[no]short Provide a terse answer. The default is to print the answer in a verbose form. +[no]identify Show [or do not show] the IP address and port number that supplied the answer when the +short option is enabled. If short form answers are requested, the default is not to show the source address and port number of the server that provided the answer. +[no]comments Toggle the display of comment lines in the output. The default is to print comments. +[no]stats This query option toggles the printing of statistics: when the query was made, the size of the reply and so on. The default behavior is to print the query statistics. +[no]qr Print [do not print] the query as it is sent. By default, the query is not printed. 
+[no]question Print [do not print] the question section of a query when an answer is returned. The default is to print the question section as a comment. +[no]answer Display [do not display] the answer section of a reply. The default is to display it. +[no]authority Display [do not display] the authority section of a reply. The default is to display it. +[no]additional Display [do not display] the additional section of a reply. The default is to display it. +[no]all Set or clear all display flags. +time=T Sets the timeout for a query to T seconds. The default timeout is 5 seconds. An attempt to set T to less than 1 will result in a query timeout of 1 second being applied. +tries=T Sets the number of times to try UDP queries to server to T instead of the default, 3. If T is less than or equal to zero, the number of tries is silently rounded up to 1. +retry=T Sets the number of times to retry UDP queries to server to T instead of the default, 2. Unlike +tries, this does not include the initial query. +ndots=D Set the number of dots that have to appear in name to D for it to be considered absolute. The default value is that defined using the ndots statement in /etc/resolv.conf, or 1 if no ndots statement is present. Names with fewer dots are interpreted as relative names and will be searched for in the domains listed in the search or domain directive in /etc/resolv.conf. +bufsize=B Set the UDP message buffer size advertised using EDNS0 to B bytes. The maximum and minimum sizes of this buffer are 65535 and 0 respectively. Values outside this range are rounded up or down appropriately. Values other than zero will cause an EDNS query to be sent. +edns=# Specify the EDNS version to query with. Valid values are 0 to 255. Setting the EDNS version will cause an EDNS query to be sent. +noedns clears the remembered EDNS version. +[no]multiline Print records like the SOA records in a verbose multi-line format with human-readable comments. 
The default is to print each record on a single line, to facilitate machine parsing of the dig output. +[no]onesoa Print only one (starting) SOA record when performing an AXFR. The default is to print both the starting and ending SOA records. +[no]fail Do not try the next server if you receive a SERVFAIL. The default is to not try the next server, which is the reverse of normal stub resolver behavior. +[no]besteffort Attempt to display the contents of messages which are malformed. The default is to not display malformed answers. +[no]dnssec Requests DNSSEC records be sent by setting the DNSSEC OK bit (DO) in the OPT record in the additional section of the query. +[no]sigchase Chase DNSSEC signature chains. Requires dig be compiled with -DDIG_SIGCHASE. +trusted-key=#### Specifies a file containing trusted keys to be used with +sigchase. Each DNSKEY record must be on its own line. If not specified, dig will look for /etc/trusted-key.key then trusted-key.key in the current directory. Requires dig be compiled with -DDIG_SIGCHASE. +[no]topdown When chasing DNSSEC signature chains perform a top-down validation. Requires dig be compiled with -DDIG_SIGCHASE. +[no]nsid Include an EDNS name server ID request when sending a query. Multiple Queries The BIND 9 implementation of dig supports specifying multiple queries on the command line (in addition to supporting the -f batch file option). Each of those queries can be supplied with its own set of flags, options and query options. In this case, each query argument represents an individual query in the command-line syntax described above. Each consists of any of the standard options and flags, the name to be looked up, an optional query type and class and any query options that should be applied to that query. A global set of query options, which should be applied to all queries, can also be supplied. 
These global query options must precede the first tuple of name, class, type, options, flags, and query options supplied on the command line. Any global query options (except the +[no]cmd option) can be overridden by a query-specific set of query options. For example: dig +qr www.isc.org any -x 127.0.0.1 isc.org ns +noqr shows how dig could be used from the command line to make three lookups: an ANY query for www.isc.org, a reverse lookup of 127.0.0.1 and a query for the NS records of isc.org. A global query option of +qr is applied, so that dig shows the initial query it made for each lookup. The final query has a local query option of +noqr which means that dig will not print the initial query when it looks up the NS records for isc.org. Idn Support If dig has been built with IDN (internationalized domain name) support, it can accept and display non-ASCII domain names. dig appropriately converts the character encoding of the domain name before sending a request to the DNS server or displaying a reply from the server. If you'd like to turn off the IDN support for some reason, define the IDN_DISABLE environment variable. The IDN support is disabled if the variable is set when dig runs. Return Codes Dig return codes are: 0: Everything went well, including things like NXDOMAIN 1: Usage error 8: Couldn't open batch file 9: No reply from server 10: Internal error Files /etc/resolv.conf ${HOME}/.digrc See Also host(1), named(8), dnssec-keygen(8), RFC1035. Bugs There are probably too many query options. Copyright Copyright 2004-2010 Internet Systems Consortium, Inc. ("ISC") Copyright 2000-2003 Internet Software Consortium. 
Referenced By dnsget(1), dnstracer(8), drill(1), nslookup(1), strobe(1), zonecheck(1) | # dig\n\n> DNS lookup utility.\n> More information: <https://manned.org/dig>.\n\n- Lookup the IP(s) associated with a hostname (A records):\n\n`dig +short {{example.com}}`\n\n- Get a detailed answer for a given domain (A records):\n\n`dig +noall +answer {{example.com}}`\n\n- Query a specific DNS record type associated with a given domain name:\n\n`dig +short {{example.com}} {{A|MX|TXT|CNAME|NS}}`\n\n- Specify an alternate DNS server to query and optionally use DNS over TLS (DoT):\n\n`dig {{+tls}} @{{1.1.1.1|8.8.8.8|9.9.9.9|...}} {{example.com}}`\n\n- Perform a reverse DNS lookup on an IP address (PTR record):\n\n`dig -x {{8.8.8.8}}`\n\n- Find authoritative name servers for the zone and display SOA records:\n\n`dig +nssearch {{example.com}}`\n\n- Perform iterative queries and display the entire trace path to resolve a domain name:\n\n`dig +trace {{example.com}}`\n\n- Query a DNS server over a non-standard [p]ort using the TCP protocol:\n\n`dig +tcp -p {{port}} @{{dns_server_ip}} {{example.com}}`\n |
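As the man page above notes, `dig -x 10.13.12.11` queries the PTR record for 11.12.13.10.in-addr.arpa: the IPv4 octets are reversed and suffixed with in-addr.arpa. A minimal sketch that builds that name by hand, with no network access required:

```shell
#!/bin/sh
# Reverse the four octets of an IPv4 address and append .in-addr.arpa,
# reproducing the name that `dig -x` constructs for a PTR lookup.
addr='10.13.12.11'
echo "$addr" | awk -F. '{ print $4 "." $3 "." $2 "." $1 ".in-addr.arpa" }'
# prints: 11.12.13.10.in-addr.arpa
```

Passing the resulting name with an explicit type and class (`dig -t PTR 11.12.13.10.in-addr.arpa`) is equivalent to the `-x` shorthand.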
dir | dir(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training dir(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | AUTHOR | REPORTING BUGS | COPYRIGHT | SEE ALSO | COLOPHON DIR(1) User Commands DIR(1) NAME top dir - list directory contents SYNOPSIS top dir [OPTION]... [FILE]... DESCRIPTION top List information about the FILEs (the current directory by default). Sort entries alphabetically if none of -cftuvSUX nor --sort is specified. Mandatory arguments to long options are mandatory for short options too. -a, --all do not ignore entries starting with . -A, --almost-all do not list implied . and .. --author with -l, print the author of each file -b, --escape print C-style escapes for nongraphic characters --block-size=SIZE with -l, scale sizes by SIZE when printing them; e.g., '--block-size=M'; see SIZE format below -B, --ignore-backups do not list implied entries ending with ~ -c with -lt: sort by, and show, ctime (time of last change of file status information); with -l: show ctime and sort by name; otherwise: sort by ctime, newest first -C list entries by columns --color[=WHEN] color the output WHEN; more info below -d, --directory list directories themselves, not their contents -D, --dired generate output designed for Emacs' dired mode -f list all entries in directory order -F, --classify[=WHEN] append indicator (one of */=>@|) to entries WHEN --file-type likewise, except do not append '*' --format=WORD across -x, commas -m, horizontal -x, long -l, single-column -1, verbose -l, vertical -C --full-time like -l --time-style=full-iso -g like -l, but do not list owner --group-directories-first group directories before files; can be augmented with a --sort option, but any use of --sort=none (-U) disables grouping -G, --no-group in a long listing, don't print group names -h, --human-readable with -l and -s, print sizes like 1K 234M 2G etc. 
--si likewise, but use powers of 1000 not 1024 -H, --dereference-command-line follow symbolic links listed on the command line --dereference-command-line-symlink-to-dir follow each command line symbolic link that points to a directory --hide=PATTERN do not list implied entries matching shell PATTERN (overridden by -a or -A) --hyperlink[=WHEN] hyperlink file names WHEN --indicator-style=WORD append indicator with style WORD to entry names: none (default), slash (-p), file-type (--file-type), classify (-F) -i, --inode print the index number of each file -I, --ignore=PATTERN do not list implied entries matching shell PATTERN -k, --kibibytes default to 1024-byte blocks for file system usage; used only with -s and per directory totals -l use a long listing format -L, --dereference when showing file information for a symbolic link, show information for the file the link references rather than for the link itself -m fill width with a comma separated list of entries -n, --numeric-uid-gid like -l, but list numeric user and group IDs -N, --literal print entry names without quoting -o like -l, but do not list group information -p, --indicator-style=slash append / indicator to directories -q, --hide-control-chars print ? 
instead of nongraphic characters --show-control-chars show nongraphic characters as-is (the default, unless program is 'ls' and output is a terminal) -Q, --quote-name enclose entry names in double quotes --quoting-style=WORD use quoting style WORD for entry names: literal, locale, shell, shell-always, shell-escape, shell-escape-always, c, escape (overrides QUOTING_STYLE environment variable) -r, --reverse reverse order while sorting -R, --recursive list subdirectories recursively -s, --size print the allocated size of each file, in blocks -S sort by file size, largest first --sort=WORD sort by WORD instead of name: none (-U), size (-S), time (-t), version (-v), extension (-X), width --time=WORD select which timestamp used to display or sort; access time (-u): atime, access, use; metadata change time (-c): ctime, status; modified time (default): mtime, modification; birth time: birth, creation; with -l, WORD determines which time to show; with --sort=time, sort by WORD (newest first) --time-style=TIME_STYLE time/date format with -l; see TIME_STYLE below -t sort by time, newest first; see --time -T, --tabsize=COLS assume tab stops at each COLS instead of 8 -u with -lt: sort by, and show, access time; with -l: show access time and sort by name; otherwise: sort by access time, newest first -U do not sort; list entries in directory order -v natural sort of (version) numbers within text -w, --width=COLS set output width to COLS. 0 means no limit -x list entries by lines instead of by columns -X sort alphabetically by entry extension -Z, --context print any security context of each file --zero end each output line with NUL, not newline -1 list one file per line --help display this help and exit --version output version information and exit The SIZE argument is an integer and optional unit (example: 10K is 10*1024). Units are K,M,G,T,P,E,Z,Y,R,Q (powers of 1024) or KB,MB,... (powers of 1000). Binary prefixes can be used, too: KiB=K, MiB=M, and so on. 
The TIME_STYLE argument can be full-iso, long-iso, iso, locale, or +FORMAT. FORMAT is interpreted like in date(1). If FORMAT is FORMAT1<newline>FORMAT2, then FORMAT1 applies to non-recent files and FORMAT2 to recent files. TIME_STYLE prefixed with 'posix-' takes effect only outside the POSIX locale. Also the TIME_STYLE environment variable sets the default style to use. The WHEN argument defaults to 'always' and can also be 'auto' or 'never'. Using color to distinguish file types is disabled both by default and with --color=never. With --color=auto, ls emits color codes only when standard output is connected to a terminal. The LS_COLORS environment variable can change the settings. Use the dircolors(1) command to set it. Exit status: 0 if OK, 1 if minor problems (e.g., cannot access subdirectory), 2 if serious trouble (e.g., cannot access command-line argument). AUTHOR top Written by Richard M. Stallman and David MacKenzie. REPORTING BUGS top GNU coreutils online help: <https://www.gnu.org/software/coreutils/> Report any translation bugs to <https://translationproject.org/team/> COPYRIGHT top Copyright 2023 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. SEE ALSO top Full documentation <https://www.gnu.org/software/coreutils/dir> or available locally via: info '(coreutils) dir invocation' COLOPHON top This page is part of the coreutils (basic file, shell and text manipulation utilities) project. Information about the project can be found at http://www.gnu.org/software/coreutils/. If you have a bug report for this manual page, see http://www.gnu.org/software/coreutils/. This page was obtained from the tarball coreutils-9.4.tar.xz fetched from http://ftp.gnu.org/gnu/coreutils/ on 2023-12-22. 
If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org GNU coreutils 9.4 August 2023 DIR(1) | # dir\n\n> List directory contents in columns, with special characters represented by backslash escape sequences.\n> Works as `ls -C --escape`.\n> More information: <https://manned.org/dir>.\n\n- List all files, including hidden files:\n\n`dir --all`\n\n- List files including their author (`-l` is required):\n\n`dir -l --author`\n\n- List files excluding those that match a specified glob pattern:\n\n`dir --hide={{pattern}}`\n\n- List subdirectories recursively:\n\n`dir --recursive`\n\n- Display help:\n\n`dir --help`\n |
dircolors | dircolors(1) - Linux manual page NAME | SYNOPSIS | DESCRIPTION | AUTHOR | REPORTING BUGS | COPYRIGHT | SEE ALSO | COLOPHON DIRCOLORS(1) User Commands DIRCOLORS(1) NAME top dircolors - color setup for ls SYNOPSIS top dircolors [OPTION]... [FILE] DESCRIPTION top Output commands to set the LS_COLORS environment variable. Determine format of output: -b, --sh, --bourne-shell output Bourne shell code to set LS_COLORS -c, --csh, --c-shell output C shell code to set LS_COLORS -p, --print-database output defaults --print-ls-colors output fully escaped colors for display --help display this help and exit --version output version information and exit If FILE is specified, read it to determine which colors to use for which file types and extensions. Otherwise, a precompiled database is used. For details on the format of these files, run 'dircolors --print-database'. AUTHOR top Written by H. Peter Anvin. REPORTING BUGS top GNU coreutils online help: <https://www.gnu.org/software/coreutils/> Report any translation bugs to <https://translationproject.org/team/> COPYRIGHT top Copyright 2023 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. SEE ALSO top Full documentation <https://www.gnu.org/software/coreutils/dircolors> or available locally via: info '(coreutils) dircolors invocation' COLOPHON top This page is part of the coreutils (basic file, shell and text manipulation utilities) project. Information about the project can be found at http://www.gnu.org/software/coreutils/. If you have a bug report for this manual page, see http://www.gnu.org/software/coreutils/. This page was obtained from the tarball coreutils-9.4.tar.xz fetched from http://ftp.gnu.org/gnu/coreutils/ on 2023-12-22. 
If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org GNU coreutils 9.4 August 2023 DIRCOLORS(1) Pages that refer to this page: dir(1), ls(1), vdir(1), dir_colors(5) | # dircolors\n\n> Output commands to set the LS_COLORS environment variable and style `ls`, `dir`, etc.\n> More information: <https://www.gnu.org/software/coreutils/dircolors>.\n\n- Output commands to set LS_COLORS using default colors:\n\n`dircolors`\n\n- Output commands to set LS_COLORS using colors from a file:\n\n`dircolors {{path/to/file}}`\n\n- Output commands for Bourne shell:\n\n`dircolors --bourne-shell`\n\n- Output commands for C shell:\n\n`dircolors --c-shell`\n\n- View the default colors for file types and extensions:\n\n`dircolors --print-database`\n |
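The commands dircolors prints are meant to be evaluated by the shell rather than read directly; a minimal sketch of the usual shell-startup idiom (assumes GNU coreutils is installed):

```shell
#!/bin/sh
# dircolors -b emits Bourne-shell code of the form:
#   LS_COLORS='...'; export LS_COLORS
# eval-ing that output defines LS_COLORS in the current shell.
eval "$(dircolors -b)"

# Confirm the variable is now set (prints a message only if non-empty).
echo "${LS_COLORS:+LS_COLORS is set}"
```

In an interactive shell this is typically placed in ~/.bashrc (or, with -c, in a C-shell startup file) so that ls and dir pick up the colors.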
dirname | dirname(1) - Linux manual page NAME | SYNOPSIS | DESCRIPTION | EXAMPLES | AUTHOR | REPORTING BUGS | COPYRIGHT | SEE ALSO | COLOPHON DIRNAME(1) User Commands DIRNAME(1) NAME top dirname - strip last component from file name SYNOPSIS top dirname [OPTION] NAME... DESCRIPTION top Output each NAME with its last non-slash component and trailing slashes removed; if NAME contains no /'s, output '.' (meaning the current directory). -z, --zero end each output line with NUL, not newline --help display this help and exit --version output version information and exit EXAMPLES top dirname /usr/bin/ -> "/usr" dirname dir1/str dir2/str -> "dir1" followed by "dir2" dirname stdio.h -> "." AUTHOR top Written by David MacKenzie and Jim Meyering. REPORTING BUGS top GNU coreutils online help: <https://www.gnu.org/software/coreutils/> Report any translation bugs to <https://translationproject.org/team/> COPYRIGHT top Copyright 2023 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. SEE ALSO top basename(1), readlink(1) Full documentation <https://www.gnu.org/software/coreutils/dirname> or available locally via: info '(coreutils) dirname invocation' COLOPHON top This page is part of the coreutils (basic file, shell and text manipulation utilities) project. Information about the project can be found at http://www.gnu.org/software/coreutils/. If you have a bug report for this manual page, see http://www.gnu.org/software/coreutils/. This page was obtained from the tarball coreutils-9.4.tar.xz fetched from http://ftp.gnu.org/gnu/coreutils/ on 2023-12-22. 
GNU coreutils 9.4 August 2023 DIRNAME(1) Pages that refer to this page: basename(1), basename(3) | # dirname\n\n> Calculates the parent directory of a file or directory path.\n> More information: <https://www.gnu.org/software/coreutils/dirname>.\n\n- Calculate the parent directory of a given path:\n\n`dirname {{path/to/file_or_directory}}`\n\n- Calculate the parent directory of multiple paths:\n\n`dirname {{path/to/file_or_directory1 path/to/file_or_directory2 ...}}`\n\n- Delimit output with a NUL character instead of a newline (useful when combining with `xargs`):\n\n`dirname --zero {{path/to/file_or_directory1 path/to/file_or_directory2 ...}}`\n |
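The EXAMPLES section of the dirname row can be checked directly in a shell; note in particular that trailing slashes are removed before the last component is stripped, and that a name with no `/` yields `.`:

```shell
# dirname strips the last non-slash component and any trailing slashes.
dirname /usr/bin/          # -> /usr   (trailing slash removed first)
dirname dir1/str dir2/str  # -> dir1, then dir2 (one result per argument)
dirname stdio.h            # -> .      (no slash: current directory)

# --zero emits NUL-terminated results, which pairs safely with xargs -0
# even when paths contain newlines:
dirname --zero a/b c/d | xargs -0 -n1 echo
```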
dmesg | dmesg(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training dmesg(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | COLORS | EXIT STATUS | AUTHORS | SEE ALSO | REPORTING BUGS | AVAILABILITY DMESG(1) User Commands DMESG(1) NAME top dmesg - print or control the kernel ring buffer SYNOPSIS top dmesg [options] dmesg --clear dmesg --read-clear [options] dmesg --console-level level dmesg --console-on dmesg --console-off DESCRIPTION top dmesg is used to examine or control the kernel ring buffer. The default action is to display all messages from the kernel ring buffer. OPTIONS top The --clear, --read-clear, --console-on, --console-off, and --console-level options are mutually exclusive. -C, --clear Clear the ring buffer. -c, --read-clear Clear the ring buffer after first printing its contents. -D, --console-off Disable the printing of messages to the console. -d, --show-delta Display the timestamp and the time delta spent between messages. If used together with --notime then only the time delta without the timestamp is printed. -E, --console-on Enable printing messages to the console. -e, --reltime Display the local time and the delta in human-readable format. Be aware that conversion to the local time could be inaccurate (see -T for more details). -F, --file file Read the syslog messages from the given file. Note that -F does not support messages in kmsg format. See -K instead. -f, --facility list Restrict output to the given (comma-separated) list of facilities. For example: dmesg --facility=daemon will print messages from system daemons only. For all supported facilities see the --help output. -H, --human Enable human-readable output. See also --color, --reltime and --nopager. -J, --json Use JSON output format. 
The time output format is in "sec.usec" format only, log priority level is not decoded by default (use --decode to split into facility and priority), the other options to control the output format or time format are silently ignored. -K, --kmsg-file file Read the /dev/kmsg messages from the given file. Different records are expected to be separated by a NULL byte. -k, --kernel Print kernel messages. -L, --color[=when] Colorize the output. The optional argument when can be auto, never or always. If the when argument is omitted, it defaults to auto. The colors can be disabled; for the current built-in default see the --help output. See also the COLORS section below. -l, --level list Restrict output to the given (comma-separated) list of levels. For example: dmesg --level=err,warn will print error and warning messages only. For all supported levels see the --help output. Appending a plus + to a level name also includes all higher levels. For example: dmesg --level=err+ will print levels err, crit, alert and emerg. Prepending it will include all lower levels. -n, --console-level level Set the level at which printing of messages is done to the console. The level is a level number or abbreviation of the level name. For all supported levels see the --help output. For example, -n 1 or -n emerg prevents all messages, except emergency (panic) messages, from appearing on the console. All levels of messages are still written to /proc/kmsg, so syslogd(8) can still be used to control exactly where kernel messages appear. When the -n option is used, dmesg will not print or clear the kernel ring buffer. --noescape Unprintable and potentially unsafe characters (e.g., broken multi-byte sequences, terminal controlling chars, etc.) are escaped in the format \x<hex> by default, for security reasons. This option disables that escaping entirely. It is usable, for example, for debugging purposes together with --raw. Be careful and don't use it by default. -P, --nopager Do not pipe output into a pager. 
A pager is enabled by default for --human output. -p, --force-prefix Add facility, level or timestamp information to each line of a multi-line message. -r, --raw Print the raw message buffer, i.e., do not strip the log-level prefixes, but all unprintable characters are still escaped (see also --noescape). Note that the real raw format depends on the method by which dmesg reads kernel messages. The /dev/kmsg device uses a different format than syslog(2). For backward compatibility, dmesg always returns data in the syslog(2) format. It is possible to read the real raw data from /dev/kmsg by, for example, the command 'dd if=/dev/kmsg iflag=nonblock'. -S, --syslog Force dmesg to use the syslog(2) kernel interface to read kernel messages. The default is to use /dev/kmsg rather than syslog(2) since kernel 3.5.0. -s, --buffer-size size Use a buffer of size to query the kernel ring buffer. This is 16392 by default. (The default kernel syslog buffer size was 4096 at first, 8192 since 1.3.54, 16384 since 2.1.113.) If you have set the kernel buffer to be larger than the default, then this option can be used to view the entire buffer. -T, --ctime Print human-readable timestamps. Be aware that the timestamp could be inaccurate! The time source used for the logs is not updated after system SUSPEND/RESUME. Timestamps are adjusted according to the current delta between the boottime and monotonic clocks; this works only for messages printed after the last resume. --since time Display records since the specified time. Sub-second granularity is supported. The time can be specified in absolute form as well as in relative notation (e.g. '1 hour ago'). Be aware that the timestamp could be inaccurate; see --ctime for more details. --until time Display records until the specified time. Sub-second granularity is supported. The time can be specified in absolute form as well as in relative notation (e.g. '1 hour ago'). 
Be aware that the timestamp could be inaccurate; see --ctime for more details. -t, --notime Do not print kernel timestamps. --time-format format Print timestamps using the given format, which can be ctime, reltime, delta or iso. The first three formats are aliases of the time-format-specific options. The iso format is a dmesg implementation of the ISO-8601 timestamp format. The purpose of this format is to make comparing timestamps between two systems, and any other parsing, easy. The definition of the iso timestamp is: YYYY-MM-DD<T>HH:MM:SS,<microseconds><±timezone offset from UTC>. The iso format has the same issue as ctime: the time may be inaccurate when a system is suspended and resumed. -u, --userspace Print userspace messages. -w, --follow Wait for new messages. This feature is supported only on systems with a readable /dev/kmsg (since kernel 3.5.0). -W, --follow-new Wait and print only new messages. -x, --decode Decode facility and level (priority) numbers to human-readable prefixes. -h, --help Display help text and exit. -V, --version Print version and exit. COLORS top The output colorization is implemented by terminal-colors.d(5) functionality. Implicit coloring can be disabled by an empty file /etc/terminal-colors.d/dmesg.disable for the dmesg command, or for all tools by /etc/terminal-colors.d/disable. The user-specific $XDG_CONFIG_HOME/terminal-colors.d or $HOME/.config/terminal-colors.d overrides the global setting. Note that the output colorization may be enabled by default, in which case terminal-colors.d directories do not have to exist yet. The logical color names supported by dmesg are: subsys The message sub-system prefix (e.g., "ACPI:"). time The message timestamp. timebreak The message timestamp in short ctime format in --reltime or --human output. alert The text of the message with the alert log priority. crit The text of the message with the critical log priority. err The text of the message with the error log priority. 
warn The text of the message with the warning log priority. segfault The text of a message that informs about a segmentation fault. EXIT STATUS top dmesg can fail with a permission-denied error. This is usually caused by the dmesg_restrict kernel setting; please see syslog(2) for more details. AUTHORS top Karel Zak <kzak@redhat.com> dmesg was originally written by Theodore Ts'o <tytso@athena.mit.edu>. SEE ALSO top terminal-colors.d(5), syslogd(8) REPORTING BUGS top For bug reports, use the issue tracker at https://github.com/util-linux/util-linux/issues. AVAILABILITY top The dmesg command is part of the util-linux package which can be downloaded from Linux Kernel Archive <https://www.kernel.org/pub/linux/utils/util-linux/>. This page is part of the util-linux (a random collection of Linux utilities) project. Information about the project can be found at https://www.kernel.org/pub/linux/utils/util-linux/. If you have a bug report for this manual page, send it to util-linux@vger.kernel.org. This page was obtained from the project's upstream Git repository git://git.kernel.org/pub/scm/utils/util-linux/util-linux.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-14.) util-linux 2.39.1041-8a7c 2023-12-22 DMESG(1) Pages that refer to this page: syslog(2), proc(5), systemd.exec(5), terminal-colors.d(5), babeltrace2-plugin-text(7), babeltrace2-source.text.dmesg(7), iptables-extensions(8), tc-bpf(8), wg(8) 
| # dmesg\n\n> Write the kernel messages to `stdout`.\n> More information: <https://manned.org/dmesg>.\n\n- Show kernel messages:\n\n`dmesg`\n\n- Show kernel error messages:\n\n`dmesg --level err`\n\n- Show kernel messages and keep reading new ones, similar to `tail -f` (available in kernels 3.5.0 and newer):\n\n`dmesg -w`\n\n- Show how much physical memory is available on this system:\n\n`dmesg | grep -i memory`\n\n- Show kernel messages 1 page at a time:\n\n`dmesg | less`\n\n- Show kernel messages with a timestamp (available in kernels 3.5.0 and newer):\n\n`dmesg -T`\n\n- Show kernel messages in human-readable form (available in kernels 3.5.0 and newer):\n\n`dmesg -H`\n\n- Colorize output (available in kernels 3.5.0 and newer):\n\n`dmesg -L`\n |
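Reading the ring buffer usually requires privileges (see the dmesg_restrict note in the EXIT STATUS section above), so as a sketch here is the level-filter idiom from the examples plus a portable way to strip the default `[sec.usec]` timestamp prefix, demonstrated on a hypothetical sample line rather than live kernel output:

```shell
# Filtering by level, as in the row above (needs read permission on the buffer):
#   dmesg --level=err,warn
# Stripping the "[   12.345678] " prefix with sed, shown on a
# hypothetical sample line so it runs without root:
sample='[   12.345678] usb 1-1: new high-speed USB device'
printf '%s\n' "$sample" | sed 's/^\[ *[0-9]*\.[0-9]*\] //'
# prints: usb 1-1: new high-speed USB device
```

In practice `dmesg --notime` does the same thing natively; the sed form is useful when processing a saved log (e.g. one captured with `dmesg > boot.log`).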
dnf | dnf(8) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training dnf(8) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | COMMANDS | SPECIFYING PACKAGES | SPECIFYING PROVIDES | SPECIFYING GROUPS | SPECIFYING MODULES | SPECIFYING TRANSACTIONS | PACKAGE FILTERING | METADATA SYNCHRONIZATION | CONFIGURATION FILES REPLACEMENT POLICY | FILES | SEE ALSO | AUTHOR | COPYRIGHT | COLOPHON DNF(8) DNF DNF(8) NAME top dnf - DNF Command Reference SYNOPSIS top dnf [options] <command> [<args>...] DESCRIPTION top DNF is the next upcoming major version of YUM, a package manager for RPM-based Linux distributions. It roughly maintains CLI compatibility with YUM and defines a strict API for extensions and plugins. Plugins can modify or extend features of DNF or provide additional CLI commands on top of those mentioned below. If you know the name of such a command (including commands mentioned below), you may find/install the package which provides it using the appropriate virtual provide in the form of dnf-command(<alias>), where <alias> is the name of the command; e.g. ``dnf install 'dnf-command(versionlock)'`` installs a versionlock plugin. This approach also applies to specifying dependencies of packages that require a particular DNF command. Return values: 0 : Operation was successful. 1 : An error occurred, which was handled by dnf. 3 : An unknown unhandled error occurred during operation. 100: See check-update 200: There was a problem with acquiring or releasing of locks. 
Available commands: alias autoremove check check-update clean deplist distro-sync downgrade group help history info install list makecache mark module provides reinstall remove repoinfo repolist repoquery repository-packages search shell swap updateinfo upgrade upgrade-minimal Additional information: Options Specifying Packages Specifying Provides Specifying File Provides Specifying Groups Specifying Transactions Metadata Synchronization Configuration Files Replacement Policy Files See Also OPTIONS top -4 Resolve to IPv4 addresses only. -6 Resolve to IPv6 addresses only. --advisory=<advisory>, --advisories=<advisory> Include packages corresponding to the advisory ID, e.g. FEDORA-2201-123. Applicable for the install, repoquery, updateinfo, upgrade and offline-upgrade (dnf-plugins-core) commands. --allowerasing Allow erasing of installed packages to resolve dependencies. This option could be used as an alternative to the yum swap command where packages to remove are not explicitly defined. --assumeno Automatically answer no for all questions. -b, --best Try the best available package versions in transactions. Specifically during dnf upgrade, which by default skips over updates that can not be installed for dependency reasons, the switch forces DNF to only consider the latest packages. When running into packages with broken dependencies, DNF will fail giving a reason why the latest version can not be installed. Note that the use of the newest available version is only guaranteed for the packages directly requested (e.g. as command line arguments), and the solver may use older versions of dependencies to meet their requirements. --bugfix Include packages that fix a bugfix issue. Applicable for the install, repoquery, updateinfo, upgrade and offline-upgrade (dnf-plugins-core) commands. --bz=<bugzilla>, --bzs=<bugzilla> Include packages that fix a Bugzilla ID, e.g. 123123. 
Applicable for the install, repoquery, updateinfo, upgrade and offline-upgrade (dnf-plugins-core) commands. -C, --cacheonly Run entirely from system cache; don't update the cache, and use it even in case it is expired. DNF uses a separate cache for each user under which it executes. The cache for the root user is called the system cache. This switch allows a regular user read-only access to the system cache, which is usually fresher than the user's, so they do not have to wait for a metadata sync. --color=<color> Control whether color is used in terminal output. Valid values are always, never and auto (default). --comment=<comment> Add a comment to the transaction history. -c <config file>, --config=<config file> Configuration file location. --cve=<cves>, --cves=<cves> Include packages that fix a CVE (Common Vulnerabilities and Exposures) ID (http://cve.mitre.org/about/ ), e.g. CVE-2201-0123. Applicable for the install, repoquery, updateinfo, upgrade and offline-upgrade (dnf-plugins-core) commands. -d <debug level>, --debuglevel=<debug level> Debugging output level. This is an integer value between 0 (no additional information strings) and 10 (shows all debugging information, even that not understandable to the user), default is 2. Deprecated, use -v instead. --debugsolver Dump data aiding in dependency solver debugging into ./debugdata. --disableexcludes=[all|main|<repoid>], --disableexcludepkgs=[all|main|<repoid>] Disable the configuration file excludes. Takes one of the following three options: all, disables all configuration file excludes main, disables excludes defined in the [main] section repoid, disables excludes defined for the given repository --disable, --set-disabled Disable specified repositories (automatically saves). The option has to be used together with the config-manager command (dnf-plugins-core). --disableplugin=<plugin names> Disable the listed plugins specified by names or globs. 
--disablerepo=<repoid> Temporarily disable active repositories for the purpose of the current dnf command. Accepts an id, a comma-separated list of ids, or a glob of ids. This option can be specified multiple times, but is mutually exclusive with --repo. --downloaddir=<path>, --destdir=<path> Redirect downloaded packages to provided directory. The option has to be used together with the --downloadonly command line option, with the download, modulesync, reposync or system-upgrade commands (dnf-plugins-core). --downloadonly Download the resolved package set without performing any rpm transaction (install/upgrade/erase). Packages are removed after the next successful transaction. This applies also when used together with --destdir option as the directory is considered as a part of the DNF cache. To persist the packages, use the download command instead. -e <error level>, --errorlevel=<error level> Error output level. This is an integer value between 0 (no error output) and 10 (shows all error messages), default is 3. Deprecated, use -v instead. --enable, --set-enabled Enable specified repositories (automatically saves). The option has to be used together with the config-manager command (dnf-plugins-core). --enableplugin=<plugin names> Enable the listed plugins specified by names or globs. --enablerepo=<repoid> Temporarily enable additional repositories for the purpose of the current dnf command. Accepts an id, a comma-separated list of ids, or a glob of ids. This option can be specified multiple times. --enhancement Include enhancement relevant packages. Applicable for the install, repoquery, updateinfo, upgrade and offline-upgrade (dnf-plugins-core) commands. -x <package-file-spec>, --exclude=<package-file-spec> Exclude packages specified by <package-file-spec> from the operation. --excludepkgs=<package-file-spec> Deprecated option. It was replaced by the --exclude option. --forcearch=<arch> Force the use of an architecture. Any architecture can be specified. 
However, use of an architecture not supported natively by your CPU will require emulation of some kind. This is usually through QEMU. The behavior of --forcearch can be configured by using the arch and ignorearch configuration options with values <arch> and True respectively. -h, --help, --help-cmd Show the help. --installroot=<path> Specifies an alternative installroot, relative to where all packages will be installed. Think of this like doing chroot <root> dnf, except using --installroot allows dnf to work before the chroot is created. It requires an absolute path. cachedir, log files, releasever, and gpgkey are taken from or stored in the installroot. Gpgkeys are imported into the installroot from a path relative to the host which can be specified in the repository section of configuration files. The configuration file and reposdir are searched inside the installroot first. If they are not present, they are taken from the host system. Note: When a path is specified within a command line argument (--config=<config file> in case of the configuration file and --setopt=reposdir=<reposdir> for reposdir) then this path is always relative to the host with no exceptions. vars are taken from the host system or installroot according to reposdir. When the reposdir path is specified within a command line argument, vars are taken from the installroot. When varsdir paths are specified within a command line argument (--setopt=varsdir=<reposdir>) then those paths are always relative to the host with no exceptions. The pluginpath and pluginconfpath are relative to the host. Note: You may also want to use the command-line option --releasever=<release> when creating the installroot, otherwise the $releasever value is taken from the rpmdb within the installroot (and thus it is empty at the time of creation and the transaction will fail). If --releasever=/ is used, the releasever will be detected from the host (/) system. 
The new installroot path at the time of creation does not contain the repository, releasever and dnf.conf files. On a modular system you may also want to use the --setopt=module_platform_id=<module_platform_name:stream> command-line option when creating the installroot, otherwise the module_platform_id value will be taken from the /etc/os-release file within the installroot (and thus it will be empty at the time of creation, the modular dependency could be unsatisfied and modules content could be excluded). Installroot examples: dnf --installroot=<installroot> --releasever=<release> install system-release Permanently sets the releasever of the system in the <installroot> directory to <release>. dnf --installroot=<installroot> --setopt=reposdir=<path> --config /path/dnf.conf upgrade Upgrades packages inside the installroot from a repository described by --setopt using configuration from /path/dnf.conf. --newpackage Include newpackage relevant packages. Applicable for the install, repoquery, updateinfo, upgrade and offline-upgrade (dnf-plugins-core) commands. --noautoremove Disable removal of dependencies that are no longer used. It sets clean_requirements_on_remove configuration option to False. --nobest Set best option to False, so that transactions are not limited to best candidates only. --nodocs Do not install documentation. Sets the rpm flag 'RPMTRANS_FLAG_NODOCS'. --nogpgcheck Skip checking GPG signatures on packages (if RPM policy allows). --noplugins Disable all plugins. --obsoletes This option has an effect on an install/update, it enables dnf's obsoletes processing logic. For more information see the obsoletes option. This option also displays capabilities that the package obsoletes when used together with the repoquery command. Configuration Option: obsoletes -q, --quiet In combination with a non-interactive command, shows just the relevant content. Suppresses messages notifying about the current state or actions of DNF. 
-R <minutes>, --randomwait=<minutes> Maximum command wait time. --refresh Set metadata as expired before running the command. --releasever=<release> Configure DNF as if the distribution release was <release>. This can affect cache paths, values in configuration files and mirrorlist URLs. --repofrompath <repo>,<path/url> Specify a repository to add to the repositories for this query. This option can be used multiple times. The repository label is specified by <repo>. The path or url to the repository is specified by <path/url>. It is the same path as a baseurl and can be also enriched by the repo variables. The configuration for the repository can be adjusted using --setopt=<repo>.<option>=<value>. If you want to view only packages from this repository, combine this with the --repo=<repo> or --disablerepo="*" switches. --repo=<repoid>, --repoid=<repoid> Enable just specific repositories by an id or a glob. Can be used multiple times with accumulative effect. It is basically a shortcut for --disablerepo="*" --enablerepo=<repoid> and is mutually exclusive with the --disablerepo option. --rpmverbosity=<name> RPM debug scriptlet output level. Sets the debug level to <name> for RPM scriptlets. For available levels, see the rpmverbosity configuration option. --sec-severity=<severity>, --secseverity=<severity> Includes packages that provide a fix for an issue of the specified severity. Applicable for the install, repoquery, updateinfo, upgrade and offline-upgrade (dnf-plugins-core) commands. --security Includes packages that provide a fix for a security issue. Applicable for the install, repoquery, updateinfo, upgrade and offline-upgrade (dnf-plugins-core) commands. --setopt=<option>=<value> Override a configuration option from the configuration file. To override configuration options for repositories, use repoid.option for the <option>. 
Values for configuration options like excludepkgs, includepkgs, installonlypkgs and tsflags are appended to the original value, they do not override it. However, specifying an empty value (e.g. --setopt=tsflags=) will clear the option. --skip-broken Resolve depsolve problems by removing packages that are causing problems from the transaction. It is an alias for the strict configuration option with value False. Additionally, with the enable and disable module subcommands it allows one to perform an action even in case of broken modular dependencies. --showduplicates Show duplicate packages in repositories. Applicable for the list and search commands. -v, --verbose Verbose operation, show debug messages. --version Show DNF version and exit. -y, --assumeyes Automatically answer yes for all questions. List options are comma-separated. Command-line options override respective settings from configuration files. COMMANDS top For an explanation of <package-spec>, <package-file-spec> and <package-name-spec> see Specifying Packages. For an explanation of <provide-spec> see Specifying Provides. For an explanation of <group-spec> see Specifying Groups. For an explanation of <module-spec> see Specifying Modules. For an explanation of <transaction-spec> see Specifying Transactions. Alias Command Command: alias Allows the user to define and manage a list of aliases (in the form <name=value>), which can be then used as dnf commands to abbreviate longer command sequences. For examples on using the alias command, see Alias Examples. For examples on the alias processing, see Alias Processing Examples. To use an alias (name=value), the name must be placed as the first "command" (e.g. the first argument that is not an option). It is then replaced by its value and the resulting sequence is again searched for aliases. The alias processing stops when the first found command is not a name of any alias. 
In case the processing would result in an infinite recursion, the original arguments are used instead. Also, like in shell aliases, if the result starts with a \, the alias processing will stop. All aliases are defined in configuration files in the /etc/dnf/aliases.d/ directory in the [aliases] section, and aliases created by the alias command are written to the USER.conf file. In case of conflicts, the USER.conf has the highest priority, and alphabetical ordering is used for the rest of the configuration files. Optionally, there is the enabled option in the [main] section defaulting to True. This can be set for each file separately in the respective file, or globally for all aliases in the ALIASES.conf file. dnf alias [options] [list] [<name>...] List aliases with their final result. The [<alias>...] parameter further limits the result to only those aliases matching it. dnf alias [options] add <name=value>... Create new aliases. dnf alias [options] delete <name>... Delete aliases. Alias Examples dnf alias list Lists all defined aliases. dnf alias add rm=remove Adds a new command alias called rm which works the same as the remove command. dnf alias add upgrade="\upgrade --skip-broken --disableexcludes=all --obsoletes" Adds a new command alias called upgrade which works the same as the upgrade command, with additional options. Note that the original upgrade command is prefixed with a \ to prevent an infinite loop in alias processing. 
Alias Processing Examples If there are defined aliases in=install and FORCE="--skip-broken --disableexcludes=all": dnf FORCE in will be replaced with dnf --skip-broken --disableexcludes=all install dnf in FORCE will be replaced with dnf install FORCE (which will fail) If there is defined alias in=install: dnf in will be replaced with dnf install dnf --repo updates in will be replaced with dnf --repo updates in (which will fail) Autoremove Command Command: autoremove Aliases for explicit NEVRA matching: autoremove-n, autoremove-na, autoremove-nevra dnf [options] autoremove Removes all "leaf" packages from the system that were originally installed as dependencies of user-installed packages, but which are no longer required by any such package. Packages listed in installonlypkgs are never automatically removed by this command. dnf [options] autoremove <spec>... This is an alias for the Remove Command command with clean_requirements_on_remove set to True. It removes the specified packages from the system along with any packages depending on the packages being removed. Each <spec> can be either a <package-spec>, which specifies a package directly, or a @<group-spec>, which specifies an (environment) group which contains it. It also removes any dependencies that are no longer needed. There are also a few specific autoremove commands autoremove-n, autoremove-na and autoremove-nevra that allow the specification of an exact argument in the NEVRA (name-epoch:version-release.architecture) format. This command by default does not force a sync of expired metadata. See also Metadata Synchronization. Check Command Command: check dnf [options] check [--dependencies] [--duplicates] [--obsoleted] [--provides] Checks the local packagedb and produces information on any problems it finds. You can limit the checks to be performed by using the --dependencies, --duplicates, --obsoleted and --provides options (the default is to check everything). 
Check-Update Command Command: check-update Aliases: check-upgrade dnf [options] check-update [--changelogs] [<package-file-spec>...] Non-interactively checks if updates of the specified packages are available. If no <package-file-spec> is given, checks whether any updates at all are available for your system. The DNF exit code will be 100 when there are updates available and a list of the updates will be printed, 0 if not and 1 if an error occurs. If the --changelogs option is specified, the changelog delta of packages about to be updated is also printed. Please note that having a specific newer version available for an installed package (and reported by check-update) does not imply that a subsequent dnf upgrade will install it. The difference is that dnf upgrade has restrictions (like package dependencies being satisfied) to take into account. The output is affected by the autocheck_running_kernel configuration option. Clean Command Command: clean Performs cleanup of temporary files kept for repositories. This includes any such data left behind from disabled or removed repositories as well as for different distribution release versions. dnf clean dbcache Removes cache files generated from the repository metadata. This forces DNF to regenerate the cache files the next time it is run. dnf clean expire-cache Marks the repository metadata expired. DNF will re-validate the cache for each repository the next time it is used. dnf clean metadata Removes repository metadata. Those are the files which DNF uses to determine the remote availability of packages. Using this option will make DNF download all the metadata the next time it is run. dnf clean packages Removes any cached packages from the system. dnf clean all Does all of the above. Deplist Command dnf [options] deplist [<select-options>] [<query-options>] [<package-spec>] Deprecated alias for dnf repoquery --deplist. 
Distro-Sync Command Command: distro-sync Aliases: dsync Deprecated aliases: distrosync, distribution-synchronization dnf distro-sync [<package-spec>...] As necessary, upgrades, downgrades or keeps selected installed packages to match the latest version available from any enabled repository. If no package is given, all installed packages are considered. See also Configuration Files Replacement Policy. Downgrade Command Command: downgrade Aliases: dg dnf [options] downgrade <package-spec>... Downgrades the specified packages to the highest installable package of all known lower versions if possible. When a version is given and it is lower than the version of the installed package, it downgrades to the target version. Group Command Command: group Aliases: grp Deprecated aliases: groups, grouplist, groupinstall, groupupdate, groupremove, grouperase, groupinfo Groups are virtual collections of packages. DNF keeps track of groups that the user selected ("marked") installed and can manipulate the comprising packages with simple commands. dnf [options] group [summary] <group-spec> Display an overview of how many groups are installed and available. With a spec, limit the output to the matching groups. summary is the default groups subcommand. dnf [options] group info <group-spec> Display the package lists of a group. Shows which packages are installed or available from a repository when -v is used. dnf [options] group install [--with-optional] <group-spec>... Mark the specified group installed and install packages it contains. Also include optional packages of the group if --with-optional is specified. All Mandatory and Default packages will be installed whenever possible. Conditional packages are installed if they meet their requirement. If the group is already (partially) installed, the command installs the missing packages from the group. Depending on the value of the obsoletes configuration option, group installation takes obsoletes into account. dnf [options] group list <group-spec>... 
List all matching groups, either among installed or available groups. If nothing is specified, list all known groups. The --installed and --available options narrow down the requested list. Records are ordered by the display_order tag defined in the comps.xml file. Hidden groups are also listed when the --hidden option is used. Group IDs are shown when the -v or --ids options are used. dnf [options] group remove <group-spec>... Mark the group removed and remove those packages in the group from the system which do not belong to another installed group and were not installed explicitly by the user. dnf [options] group upgrade <group-spec>... Upgrades the packages from the group and upgrades the group itself. The latter comprises installing packages that were added to the group by the distribution and removing packages that were removed from the group, as long as they were not installed explicitly by the user. Groups can also be marked installed or removed without physically manipulating any packages: dnf [options] group mark install <group-spec>... Mark the specified group installed. No packages will be installed by this command, but the group is then considered installed. dnf [options] group mark remove <group-spec>... Mark the specified group removed. No packages will be removed by this command. See also Configuration Files Replacement Policy. Help Command Command: help dnf help [<command>] Displays the help text for all commands. If given a command name, displays help only for that particular command. History Command Command: history Aliases: hist The history command allows the user to view what has happened in past transactions and act according to this information (assuming the history_record configuration option is set). dnf history [list] [--reverse] [<spec>...] The default history action is listing information about given transactions in a table.
Each <spec> can be either a <transaction-spec>, which specifies a transaction directly, or a <transaction-spec>..<transaction-spec>, which specifies a range of transactions, or a <package-name-spec>, which specifies a transaction by a package which it manipulated. When no transaction is specified, list all known transactions. The "Action(s)" column lists each type of action taken in the transaction. The possible values are: Install (I): a new package was installed on the system Downgrade (D): an older version of a package replaced the previously-installed version Obsolete (O): an obsolete package was replaced by a new package Upgrade (U): a newer version of the package replaced the previously-installed version Remove (E): a package was removed from the system Reinstall (R): a package was reinstalled with the same version Reason change (C): a package was kept in the system but its reason for being installed changed The "Altered" column lists the number of actions taken in each transaction, possibly followed by one or two of the following symbols: >: The RPM database was changed, outside DNF, after the transaction <: The RPM database was changed, outside DNF, before the transaction *: The transaction aborted before completion #: The transaction completed, but with a non-zero status E: The transaction completed successfully, but had warning/error output --reverse Print the history list output in reverse order. dnf history info [<spec>...] Describe the given transactions. The meaning of <spec> is the same as in the History List Command. When no transaction is specified, describe what happened during the latest transaction. dnf history redo <transaction-spec>|<package-file-spec> Repeat the specified transaction. Uses the last transaction (with the highest ID) if more than one transaction for the given <package-file-spec> is found. If it is not possible to redo some operations due to the current state of RPMDB, it will not redo the transaction.
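The three accepted <spec> shapes can be sketched as follows. dnf is mocked to echo its arguments, purely to show the forms; the transaction IDs are made up:

```shell
# Mock for illustration only: echo back what would be listed.
dnf() { shift 2; echo "listing: $*"; }

out1=$(dnf history list 42)        # a single <transaction-spec>
out2=$(dnf history list 40..42)    # a <transaction-spec>..<transaction-spec> range
out3=$(dnf history list kernel)    # a <package-name-spec>: transactions touching kernel
echo "$out2"
```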
dnf history replay [--ignore-installed] [--ignore-extras] [--skip-unavailable] <filename> Replay a transaction stored in file <filename> by the History Store Command. The replay will perform the exact same operations on the packages as in the original transaction and will return with an error in case of any differences in installed packages or their versions. See also the Transaction JSON Format specification of the file format. --ignore-installed Don't check for the installed packages being in the same state as those recorded in the transaction. E.g. in case there is an upgrade foo-1.0 -> foo-2.0 stored in the transaction, but there is foo-1.1 installed on the target system. --ignore-extras Don't check for extra packages pulled into the transaction on the target system. E.g. the target system may not have some dependency, which was installed on the source system. The replay errors out on this by default, as the transaction would not be the same. --skip-unavailable In case some packages stored in the transaction are not available on the target system, skip them instead of erroring out. dnf history rollback <transaction-spec>|<package-file-spec> Undo all transactions performed after the specified transaction. Uses the last transaction (with the highest ID) if more than one transaction for the given <package-file-spec> is found. If it is not possible to undo some transactions due to the current state of RPMDB, it will not undo any transaction. dnf history store [--output <output-file>] <transaction-spec> Store a transaction specified by <transaction-spec>. The transaction can later be replayed by the History Replay Command. Warning: The stored transaction format is considered unstable and may change at any time. It will work if the same version of dnf is used to store and replay (or between versions, as long as the format stays the same). -o <output-file>, --output=<output-file> Store the serialized transaction into <output-file>. Default is transaction.json.
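The store/replay pair is meant for reproducing a transaction on another machine. A hedged sketch of the round trip; dnf is mocked here, and the JSON content is a stand-in, not the real Transaction JSON Format:

```shell
# Mock for illustration only: "store" writes the default output file,
# "replay" reads it back. The real commands operate on the RPMDB.
dnf() {
  case "$1 $2" in
    "history store")  echo '{"version": "stand-in"}' > transaction.json ;;
    "history replay") cat "$3" >/dev/null && echo "replayed $3" ;;
  esac
}

dnf history store 42                          # store transaction 42 on machine A
out=$(dnf history replay transaction.json)    # copy the file, replay on machine B
echo "$out"
```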
dnf history undo <transaction-spec>|<package-file-spec> Perform the opposite operation to all operations performed in the specified transaction. Uses the last transaction (with the highest ID) if more than one transaction for the given <package-file-spec> is found. If it is not possible to undo some operations due to the current state of RPMDB, it will not undo the transaction. dnf history userinstalled Show all installonly packages, packages installed outside of DNF and packages not installed as a dependency. I.e. it lists packages that will stay on the system when the Autoremove Command or the Remove Command along with the clean_requirements_on_remove configuration option set to True is executed. Note that the same results can be accomplished with dnf repoquery --userinstalled, and the repoquery command is more powerful at formatting the output. This command by default does not force a sync of expired metadata, except for the redo, rollback, and undo subcommands. See also Metadata Synchronization and Configuration Files Replacement Policy. Info Command Command: info Aliases: if dnf [options] info [<package-file-spec>...] Lists description and summary information about installed and available packages. The info command limits the displayed packages the same way as the list command. This command by default does not force a sync of expired metadata. See also Metadata Synchronization. Install Command Command: install Aliases: in Aliases for explicit NEVRA matching: install-n, install-na, install-nevra Deprecated aliases: localinstall dnf [options] install <spec>... Makes sure that the given packages and their dependencies are installed on the system. Each <spec> can be either a <package-spec>, or a @<module-spec>, or a @<group-spec>. See Install Examples. If a given package or provide cannot be (and is not already) installed, the exit code will be non-zero. If the <spec> matches both a @<module-spec> and a @<group-spec>, only the module is installed.
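Because a failed install yields a non-zero exit code, provisioning scripts can gate on it directly. A sketch; dnf is mocked so both branches run standalone, with "tito" standing in for an installable package:

```shell
# Mock for illustration only: pretend only "tito" can be installed,
# i.e. succeed (exit 0) for it and fail for anything else.
dnf() { [ "$2" = "tito" ]; }

if dnf install tito >/dev/null 2>&1; then ok="yes"; else ok="no"; fi
if dnf install nosuchpkg >/dev/null 2>&1; then missing_ok="yes"; else missing_ok="no"; fi
echo "tito=$ok nosuchpkg=$missing_ok"
```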
When a <package-spec> giving the exact version of a package is specified, DNF will install that version, no matter which version of the package is already installed. The former version of the package will be removed in the case of a non-installonly package. On the other hand, if <package-spec> specifies only a name, DNF also takes into account packages obsoleting it when picking which package to install. This behaviour is specific to the install command. Note that this can lead to seemingly unexpected results if a package has multiple versions and some older version is being obsoleted. This creates a split in the upgrade path; both ways are considered correct, and the resulting package is picked simply by lexicographical order. There are also a few specific install commands install-n, install-na and install-nevra that allow the specification of an exact argument in the NEVRA format. See also Configuration Files Replacement Policy. Install Examples dnf install tito Install the tito package (tito is the package name). dnf install ~/Downloads/tito-0.6.2-1.fc22.noarch.rpm Install a local rpm file tito-0.6.2-1.fc22.noarch.rpm from the ~/Downloads/ directory. dnf install tito-0.5.6-1.fc22 Install the package with a specific version. If the package is already installed it will automatically try to downgrade or upgrade to the specific version. dnf --best install tito Install the latest available version of the package. If the package is already installed it will try to automatically upgrade to the latest version. If the latest version of the package cannot be installed, the installation will fail. dnf install vim DNF will automatically recognize that vim is not a package name, but will look up and install a package that provides vim with all the required dependencies. Note: Package name match has precedence over package provides match.
dnf install https://kojipkgs.fedoraproject.org//packages/tito/0.6.0/1.fc22/noarch/tito-0.6.0-1.fc22.noarch.rpm Install a package directly from a URL. dnf install '@docker' Install all default profiles of module 'docker' and their RPMs. Module streams get enabled accordingly. dnf install '@Web Server' Install the 'Web Server' environmental group. dnf install /usr/bin/rpmsign Install a package that provides the /usr/bin/rpmsign file. dnf -y install tito --setopt=install_weak_deps=False Install the tito package (tito is the package name) without weak deps. Weak deps are not required for core functionality of the package, but they enhance the original package (like extended documentation, plugins, additional functions, etc.). dnf install --advisory=FEDORA-2018-b7b99fe852 \* Install all packages that belong to the "FEDORA-2018-b7b99fe852" advisory. List Command Command: list Aliases: ls Prints lists of packages depending on the packages' relation to the system. A package is installed if it is present in the RPMDB, and it is available if it is not installed but is present in a repository that DNF knows about. The list command also limits the displayed packages according to specific criteria, e.g. to only those that update an installed package (respecting the repository priority). The exclude option in the configuration file can influence the result, but if the --disableexcludes command line option is used, it ensures that all installed packages will be listed. dnf [options] list [--all] [<package-file-spec>...] Lists all packages present in the RPMDB, in a repository, or both. dnf [options] list --installed [<package-file-spec>...] Lists installed packages. dnf [options] list --available [<package-file-spec>...] Lists available packages. dnf [options] list --extras [<package-file-spec>...] Lists extras, that is, packages installed on the system that are not available in any known repository. dnf [options] list --obsoletes [<package-file-spec>...]
List packages installed on the system that are obsoleted by packages in any known repository. dnf [options] list --recent [<package-file-spec>...] List packages recently added into the repositories. dnf [options] list --upgrades [<package-file-spec>...] List upgrades available for the installed packages. dnf [options] list --autoremove List packages which will be removed by the dnf autoremove command. This command by default does not force a sync of expired metadata. See also Metadata Synchronization. Makecache Command Command: makecache Aliases: mc dnf [options] makecache Downloads and caches metadata for enabled repositories. Tries to avoid downloading whenever possible (e.g. when the local metadata hasn't expired yet or when the metadata timestamp hasn't changed). dnf [options] makecache --timer Like plain makecache, but instructs DNF to be more resource-aware, meaning it will not do anything if running on battery power and will terminate immediately if it's too soon after the last successful makecache run (see dnf.conf(5), metadata_timer_sync). Mark Command Command: mark dnf mark install <package-spec>... Marks the specified packages as installed by user. This can be useful if any package was installed as a dependency and is desired to stay on the system when Autoremove Command or Remove Command along with clean_requirements_on_remove configuration option set to True is executed. dnf mark remove <package-spec>... Unmarks the specified packages as installed by user. Whenever you as a user don't need a specific package you can mark it for removal. The package stays installed on the system but will be removed when Autoremove Command or Remove Command along with clean_requirements_on_remove configuration option set to True is executed. You should use this operation instead of Remove Command if you're not sure whether the package is a requirement of other user installed packages on the system. dnf mark group <package-spec>... 
Marks the specified packages as installed by group. This can be useful if a package was installed as a dependency or by a user and should be protected and handled as a group member, e.g. during group remove. Module Command Command: module Modularity overview is available at man page dnf.modularity(7). Module subcommands take <module-spec>... arguments that specify modules or profiles. dnf [options] module install <module-spec>... Install module profiles, including their packages. In case no profile was provided, all default profiles get installed. Module streams get enabled accordingly. This command cannot be used for switching module streams. Use the dnf module switch-to command for that. dnf [options] module update <module-spec>... Update packages associated with an active module stream, optionally restricted to a profile. If the profile_name is provided, only the packages referenced by that profile will be updated. dnf [options] module switch-to <module-spec>... Switch to or enable a module stream, change versions of installed packages to versions provided by the new stream, and remove packages from the old stream that are no longer available. It also updates installed profiles if they are available for the new stream. When a profile was provided, it installs that profile and does not update any already installed profiles. This command can be used as a stronger version of the dnf module enable command, which not only enables modules, but also performs a distro-sync on all modular packages in the enabled modules. It can also be used as a stronger version of the dnf module install command, but it requires specifying the profiles that are supposed to be installed, because the switch-to command does not use default profiles. The switch-to command doesn't only install profiles; it also performs a distro-sync on all modular packages in the installed module. dnf [options] module remove <module-spec>...
Remove installed module profiles, including packages that were installed with the dnf module install command. Will not remove packages required by other installed module profiles or by other user-installed packages. In case no profile was provided, all installed profiles get removed. dnf [options] module remove --all <module-spec>... Remove installed module profiles, including packages that were installed with the dnf module install command. With the --all option it additionally removes all packages whose names are provided by the specified modules. Packages required by other installed module profiles and packages whose names are also provided by any other module are not removed. dnf [options] module enable <module-spec>... Enable a module stream and make the stream RPMs available in the package set. Modular dependencies are resolved, dependencies checked and also recursively enabled. In case of a modular dependency issue, the operation will be rejected. To perform the action anyway, use the --skip-broken option. This command cannot be used for switching module streams. Use the dnf module switch-to command for that. dnf [options] module disable <module-name>... Disable a module. All related module streams will become unavailable. Consequently, all installed profiles will be removed and the module RPMs will become unavailable in the package set. In case of a modular dependency issue, the operation will be rejected. To perform the action anyway, use the --skip-broken option. dnf [options] module reset <module-name>... Reset the module state so it's no longer enabled or disabled. Consequently, all installed profiles will be removed and only RPMs from the default stream will be available in the package set. dnf [options] module provides <package-name-spec>... Lists all modular packages matching <package-name-spec> from all modules (including disabled), along with the modules and streams they belong to. dnf [options] module list [--all] [module_name...]
Lists all module streams, their profiles and states (enabled, disabled, default). dnf [options] module list --enabled [module_name...] Lists module streams that are enabled. dnf [options] module list --disabled [module_name...] Lists module streams that are disabled. dnf [options] module list --installed [module_name...] List module streams with installed profiles. dnf [options] module info <module-spec>... Print detailed information about a given module stream. dnf [options] module info --profile <module-spec>... Print detailed information about the given module profiles. dnf [options] module repoquery <module-spec>... List all available packages belonging to the selected modules. dnf [options] module repoquery --available <module-spec>... List all available packages belonging to the selected modules. dnf [options] module repoquery --installed <module-spec>... List all installed packages with the same name as packages belonging to the selected modules. Provides Command Command: provides Aliases: prov, whatprovides, wp dnf [options] provides <provide-spec> Finds the packages providing the given <provide-spec>. This is useful when one knows a filename and wants to find what package (installed or not) provides this file. The <provide-spec> is looked for in the following locations, in order: 1. The <provide-spec> is matched with all file provides of any available package: $ dnf provides /usr/bin/gzip gzip-1.9-9.fc29.x86_64 : The GNU data compression program Matched from: Filename : /usr/bin/gzip 2. Then all provides of all available packages are searched: $ dnf provides "gzip(x86-64)" gzip-1.9-9.fc29.x86_64 : The GNU data compression program Matched from: Provide : gzip(x86-64) = 1.9-9.fc29 3. DNF assumes that the <provide-spec> is a system command, prepends it with the /usr/bin/ and /usr/sbin/ prefixes (one at a time) and does the file provides search again.
For legacy reasons (packages that didn't do UsrMove) the /bin and /sbin prefixes are also searched: $ dnf provides zless gzip-1.9-9.fc29.x86_64 : The GNU data compression program Matched from: Filename : /usr/bin/zless 4. If this last step also fails, DNF returns "Error: No Matches found". This command by default does not force a sync of expired metadata. See also Metadata Synchronization. Reinstall Command Command: reinstall Aliases: rei dnf [options] reinstall <package-spec>... Installs the specified packages; fails if some of the packages are either not installed or not available (i.e. there is no repository from which to download the same RPM). Remove Command Command: remove Aliases: rm Aliases for explicit NEVRA matching: remove-n, remove-na, remove-nevra Deprecated aliases: erase, erase-n, erase-na, erase-nevra dnf [options] remove <package-spec>... Removes the specified packages from the system along with any packages depending on the packages being removed. Each <spec> can be either a <package-spec>, which specifies a package directly, or a @<group-spec>, which specifies an (environment) group which contains it. If clean_requirements_on_remove is enabled (the default), also removes any dependencies that are no longer needed. dnf [options] remove --duplicates Removes older versions of duplicate packages. To ensure the integrity of the system it reinstalls the newest package. In some cases the command cannot resolve conflicts. In such cases the dnf shell command, with the remove --duplicates and upgrade sub-commands, could help. dnf [options] remove --oldinstallonly Removes old installonly packages, keeping only the latest versions and the version of the running kernel. There are also a few specific remove commands remove-n, remove-na and remove-nevra that allow the specification of an exact argument in the NEVRA format. Remove Examples dnf remove acpi tito Remove the acpi and tito packages.
dnf remove $(dnf repoquery --extras --exclude=tito,acpi) Remove packages not present in any repository, but don't remove the tito and acpi packages (they still might be removed if they depend on some of the removed packages). Remove older versions of duplicated packages (an equivalent of yum's package-cleanup --cleandups): dnf remove --duplicates Repoinfo Command Command: repoinfo An alias for the repolist command that provides more detailed information, like dnf repolist -v. Repolist Command Command: repolist dnf [options] repolist [--enabled|--disabled|--all] Depending on the option used, lists enabled, disabled, or all known repositories. Lists all enabled repositories by default. Provides more detailed information when the -v option is used. This command by default does not force a sync of expired metadata. See also Metadata Synchronization. Repoquery Command Command: repoquery Aliases: rq Aliases for explicit NEVRA matching: repoquery-n, repoquery-na, repoquery-nevra dnf [options] repoquery [<select-options>] [<query-options>] [<package-file-spec>] Searches available DNF repositories for selected packages and displays the requested information about them. It is an equivalent of rpm -q for remote repositories. dnf [options] repoquery --groupmember <package-spec>... List groups that contain <package-spec>. dnf [options] repoquery --querytags Provides the list of tags recognized by the --queryformat repoquery option. There are also a few specific repoquery commands repoquery-n, repoquery-na and repoquery-nevra that allow the specification of an exact argument in the NEVRA format (does not affect arguments of options like --whatprovides <arg>, ...). Select Options Together with <package-file-spec>, these control what packages are displayed in the output. If <package-file-spec> is given, it limits the resulting set of packages to those matching the specification. All packages are considered if no <package-file-spec> is specified.
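As in the remove --extras example above, repoquery output is one NEVRA per line and is designed to be piped into other commands. A sketch of counting matches; dnf is mocked to emit two made-up NEVRAs so the pipeline runs standalone:

```shell
# Mock for illustration only: emit two fake NEVRAs, one per line,
# as the real repoquery would.
dnf() { printf 'foo-1.0-1.x86_64\nbar-2.0-1.noarch\n'; }

extras=$(dnf repoquery --extras --exclude=tito,acpi)
count=$(printf '%s\n' "$extras" | wc -l)
echo "extras found: $count"
```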
<package-file-spec> Package specification in the NEVRA format (name[-[epoch:]version[-release]][.arch]), a package provide or a file provide. See Specifying Packages. -a, --all Query all packages (for rpmquery compatibility, also a shorthand for repoquery '*' or repoquery without arguments). --arch <arch>[,<arch>...], --archlist <arch>[,<arch>...] Limit the resulting set only to packages of the selected architectures (default is all architectures). In some cases the result is affected by the basearch of the running system, therefore to run repoquery for an arch incompatible with your system use the --forcearch=<arch> option to change the basearch. --duplicates Limit the resulting set to installed duplicate packages (i.e. multiple package versions for the same name and architecture). Installonly packages are excluded from this set. --unneeded Limit the resulting set to leaf packages that were installed as dependencies and are no longer needed. This switch lists packages that are going to be removed after executing the dnf autoremove command. --available Limit the resulting set to available packages only (set by default). --disable-modular-filtering Disables filtering of modular packages, so that packages of inactive module streams are included in the result. --extras Limit the resulting set to packages that are not present in any of the available repositories. -f <file>, --file <file> Limit the resulting set only to the package that owns <file>. --installed Limit the resulting set to installed packages only. The exclude option in the configuration file might influence the result, but if the command line option --disableexcludes is used, it ensures that all installed packages will be listed. --installonly Limit the resulting set to installed installonly packages. --latest-limit <number> Limit the resulting set to the <number> latest packages for every package name and architecture. If <number> is negative, skip the <number> latest packages.
For a negative <number> use the --latest-limit=<number> syntax. --recent Limit the resulting set to packages that were recently edited. --repo <repoid> Limit the resulting set only to packages from a repository identified by <repoid>. Can be used multiple times with accumulative effect. --unsatisfied Report unsatisfied dependencies among installed packages (i.e. missing requires and existing conflicts). --upgrades Limit the resulting set to packages that provide an upgrade for some already installed package. --userinstalled Limit the resulting set to packages installed by the user. The exclude option in the configuration file might influence the result, but if the command line option --disableexcludes is used, it ensures that all installed packages will be listed. --whatdepends <capability>[,<capability>...] Limit the resulting set only to packages that require, enhance, recommend, suggest or supplement any of <capabilities>. --whatconflicts <capability>[,<capability>...] Limit the resulting set only to packages that conflict with any of <capabilities>. --whatenhances <capability>[,<capability>...] Limit the resulting set only to packages that enhance any of <capabilities>. Use --whatdepends if you want to list all depending packages. --whatobsoletes <capability>[,<capability>...] Limit the resulting set only to packages that obsolete any of <capabilities>. --whatprovides <capability>[,<capability>...] Limit the resulting set only to packages that provide any of <capabilities>. --whatrecommends <capability>[,<capability>...] Limit the resulting set only to packages that recommend any of <capabilities>. Use --whatdepends if you want to list all depending packages. --whatrequires <capability>[,<capability>...] Limit the resulting set only to packages that require any of <capabilities>. Use --whatdepends if you want to list all depending packages. --whatsuggests <capability>[,<capability>...]
Limit the resulting set only to packages that suggest any of <capabilities>. Use --whatdepends if you want to list all depending packages. --whatsupplements <capability>[,<capability>...] Limit the resulting set only to packages that supplement any of <capabilities>. Use --whatdepends if you want to list all depending packages. --alldeps This option is stackable with --whatrequires or --whatdepends only. Additionally it adds all packages requiring the package features to the result set (used as the default). --exactdeps This option is stackable with --whatrequires or --whatdepends only. Limit the resulting set only to packages that require the <capability> specified by --whatrequires. --srpm Operate on the corresponding source RPM. Query Options Set what information is displayed about each package. The following are mutually exclusive, i.e. at most one can be specified. If no query option is given, matching packages are displayed in the standard NEVRA notation. -i, --info Show detailed information about the package. -l, --list Show the list of files in the package. -s, --source Show the package source RPM name. --changelogs Print the package changelogs. --conflicts Display capabilities that the package conflicts with. Same as --qf "%{conflicts}". --depends Display capabilities that the package depends on, enhances, recommends, suggests or supplements. --enhances Display capabilities enhanced by the package. Same as --qf "%{enhances}". --location Show a location where the package could be downloaded from. --obsoletes Display capabilities that the package obsoletes. Same as --qf "%{obsoletes}". --provides Display capabilities provided by the package. Same as --qf "%{provides}". --recommends Display capabilities recommended by the package. Same as --qf "%{recommends}". --requires Display capabilities that the package depends on. Same as --qf "%{requires}". --requires-pre Display capabilities that the package depends on for running a %pre script.
Same as --qf "%{requires-pre}". --suggests Display capabilities suggested by the package. Same as --qf "%{suggests}". --supplements Display capabilities supplemented by the package. Same as --qf "%{supplements}". --tree Display a recursive tree of packages with capabilities specified by one of the following supplementary options: --whatrequires, --requires, --conflicts, --enhances, --suggests, --provides, --supplements, --recommends. --deplist Produce a list of all direct dependencies and what packages provide those dependencies for the given packages. The result only shows the newest providers (which can be changed by using --verbose). --nvr Show found packages in the name-version-release format. Same as --qf "%{name}-%{version}-%{release}". --nevra Show found packages in the name-epoch:version-release.architecture format. Same as --qf "%{name}-%{epoch}:%{version}-%{release}.%{arch}" (default). --envra Show found packages in the epoch:name-version-release.architecture format. Same as --qf "%{epoch}:%{name}-%{version}-%{release}.%{arch}". --qf <format>, --queryformat <format> Custom display format. <format> is the string to output for each matched package. Every occurrence of %{<tag>} within is replaced by the corresponding attribute of the package. The list of recognized tags can be displayed by running dnf repoquery --querytags. --recursive Query packages recursively. Has to be used with --whatrequires <REQ> (optionally with --alldeps, but not with --exactdeps) or with --requires <REQ> --resolve. --resolve Resolve capabilities to the originating package(s).
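A custom --queryformat makes repoquery output easy to consume in scripts. A sketch using the documented %{name}, %{arch} and %{reponame} tags; dnf is mocked to emit what the real command might print for a single match, and the repository name is made up:

```shell
# Mock for illustration only: one line in the requested
# "%{name}.%{arch} : %{reponame}" format.
dnf() { echo 'lighttpd.x86_64 : fedora'; }

line=$(dnf repoquery --queryformat '%{name}.%{arch} : %{reponame}' lighttpd)
name=${line%%.*}    # package name is everything before the first dot
echo "$name"
```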
Examples Display NEVRAs of all available packages matching light*: dnf repoquery 'light*' Display NEVRAs of all available packages matching name light* and architecture noarch (accepts only arguments in the "<name>.<arch>" format): dnf repoquery-na 'light*.noarch' Display requires of all lighttpd packages: dnf repoquery --requires lighttpd Display packages providing the requires of python packages: dnf repoquery --requires python --resolve Display the source rpm of the lighttpd package: dnf repoquery --source lighttpd Display the package name that owns the given file: dnf repoquery --file /etc/lighttpd/lighttpd.conf Display name, architecture and the containing repository of all lighttpd packages: dnf repoquery --queryformat '%{name}.%{arch} : %{reponame}' lighttpd Display all available packages providing "webserver": dnf repoquery --whatprovides webserver Display all available packages providing "webserver" but only for the "i686" architecture: dnf repoquery --whatprovides webserver --arch i686 Display duplicate packages: dnf repoquery --duplicates Display source packages that require a <provide> for a build: dnf repoquery --disablerepo="*" --enablerepo="*-source" --arch=src --whatrequires <provide> Repository-Packages Command Command: repository-packages Deprecated aliases: repo-pkgs, repo-packages, repository-pkgs The repository-packages command allows the user to run commands on top of all packages in the repository named <repoid>. However, any dependency resolution takes into account packages from all enabled repositories. The <package-file-spec> and <package-spec> specifications further limit the candidates to only those packages matching at least one of them. The info subcommand lists description and summary information about packages depending on the packages' relation to the repository. The list subcommand just prints lists of those packages. dnf [options] repository-packages <repoid> check-update [<package-file-spec>...]
Non-interactively checks if updates of the specified packages in the repository are available. The DNF exit code will be 100 when there are updates available, and a list of the updates will be printed. dnf [options] repository-packages <repoid> info [--all] [<package-file-spec>...] List all related packages. dnf [options] repository-packages <repoid> info --installed [<package-file-spec>...] List packages installed from the repository. dnf [options] repository-packages <repoid> info --available [<package-file-spec>...] List packages available in the repository but not currently installed on the system. dnf [options] repository-packages <repoid> info --extras [<package-file-spec>...] List packages installed from the repository that are not available in any repository. dnf [options] repository-packages <repoid> info --obsoletes [<package-file-spec>...] List packages in the repository that obsolete packages installed on the system. dnf [options] repository-packages <repoid> info --recent [<package-file-spec>...] List packages recently added into the repository. dnf [options] repository-packages <repoid> info --upgrades [<package-file-spec>...] List packages in the repository that upgrade packages installed on the system. dnf [options] repository-packages <repoid> install [<package-spec>...] Install packages matching <package-spec> from the repository. If <package-spec> isn't specified at all, install all packages from the repository. dnf [options] repository-packages <repoid> list [--all] [<package-file-spec>...] List all related packages. dnf [options] repository-packages <repoid> list --installed [<package-file-spec>...] List packages installed from the repository. dnf [options] repository-packages <repoid> list --available [<package-file-spec>...] List packages available in the repository but not currently installed on the system. dnf [options] repository-packages <repoid> list --extras [<package-file-spec>...]
List packages installed from the repository that are not available in any repository. dnf [options] repository-packages <repoid> list --obsoletes [<package-file-spec>...] List packages in the repository that obsolete packages installed on the system. dnf [options] repository-packages <repoid> list --recent [<package-file-spec>...] List packages recently added into the repository. dnf [options] repository-packages <repoid> list --upgrades [<package-file-spec>...] List packages in the repository that upgrade packages installed on the system. dnf [options] repository-packages <repoid> move-to [<package-spec>...] Reinstall all those packages that are available in the repository. dnf [options] repository-packages <repoid> reinstall [<package-spec>...] Run the reinstall-old subcommand. If it fails, run the move-to subcommand. dnf [options] repository-packages <repoid> reinstall-old [<package-spec>...] Reinstall all those packages that were installed from the repository and simultaneously are available in the repository. dnf [options] repository-packages <repoid> remove [<package-spec>...] Remove all packages installed from the repository along with any packages depending on the packages being removed. If clean_requirements_on_remove is enabled (the default) also removes any dependencies that are no longer needed. dnf [options] repository-packages <repoid> remove-or-distro-sync [<package-spec>...] Select all packages installed from the repository. Upgrade, downgrade or keep those of them that are available in another repository to match the latest version available there and remove the others along with any packages depending on the packages being removed. If clean_requirements_on_remove is enabled (the default) also removes any dependencies that are no longer needed. dnf [options] repository-packages <repoid> remove-or-reinstall [<package-spec>...] Select all packages installed from the repository. 
Reinstall those of them that are available in another repository and remove the others along with any packages depending on the packages being removed. If clean_requirements_on_remove is enabled (the default) also removes any dependencies that are no longer needed. dnf [options] repository-packages <repoid> upgrade [<package-spec>...] Update all packages to the highest resolvable version available in the repository. When versions are specified in the <package-spec>, update to these versions. dnf [options] repository-packages <repoid> upgrade-to [<package-spec>...] A deprecated alias for the upgrade subcommand. Search Command Command: search Aliases: se dnf [options] search [--all] <keywords>... Search package metadata for keywords. Keywords are matched as case-insensitive substrings; globbing is supported. By default lists packages that match all requested keys (AND operation). Keys are searched in package names and summaries. If the --all option is used, lists packages that match at least one of the keys (an OR operation). In addition the keys are searched in the package descriptions and URLs. The result is sorted from the most relevant results to the least. This command by default does not force a sync of expired metadata. See also Metadata Synchronization. Shell Command Command: shell Aliases: sh dnf [options] shell [filename] Open an interactive shell for conducting multiple commands during a single execution of DNF. These commands can be issued manually or passed to DNF from a file. The commands are much the same as the normal DNF command line options. There are a few additional commands documented below. config [conf-option] [value] Set a configuration option to a requested value. If no value is given it prints the current value.
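Because dnf shell accepts its commands from a file or a pipe as well as interactively, a batch of shell commands can be generated by a script. A minimal sketch, assuming dnf is available; the package name below is a placeholder, not a recommendation:

```shell
# Hedged sketch: emit a batch of dnf shell commands on stdout. Piping this
# into `dnf shell` runs them within a single DNF execution.
# The package name is a placeholder for illustration only.
make_dnf_shell_batch() {
  cat <<'EOF'
config assumeyes True
install vim-enhanced
transaction run
EOF
}
# Typical use (requires root): make_dnf_shell_batch | sudo dnf shell
```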
repo [list|enable|disable] [repo-id] list: list repositories and their status enable: enable repository disable: disable repository transaction [list|reset|solve|run] list: resolve and list the content of the transaction reset: reset the transaction solve: resolve the transaction set run: resolve and run the transaction Note that all local packages must be used in the first shell transaction subcommand (e.g. install /tmp/nodejs-1-1.x86_64.rpm /tmp/acpi-1-1.noarch.rpm), otherwise an error will occur. Any disable, enable, and reset module operations (e.g. module enable nodejs) must also be performed before any other shell transaction subcommand is used. Swap Command Command: swap dnf [options] swap <remove-spec> <install-spec> Remove spec and install spec in one transaction. Each <spec> can be either a <package-spec>, which specifies a package directly, or a @<group-spec>, which specifies an (environment) group which contains it. Automatic conflict solving is provided in DNF by the --allowerasing option that provides the functionality of the swap command automatically. Updateinfo Command Command: updateinfo Aliases: upif Deprecated aliases: list-updateinfo, list-security, list-sec, info-updateinfo, info-security, info-sec, summary-updateinfo dnf [options] updateinfo [--summary|--list|--info] [<availability>] [<spec>...] Display information about update advisories. Depending on the output type, DNF displays just counts of advisory types (omitted or --summary), list of advisories (--list) or detailed information (--info). The -v option extends the output. When used with --info, the information is even more detailed. When used with --list, an additional column with date of the last advisory update is added.
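The list output can also be consumed by scripts. The sketch below counts pending security advisories; it assumes dnf is installed and that in the default --list format the advisory-type column contains "Sec." or "security" for security advisories, which may vary between DNF versions:

```shell
# Hedged sketch: count lines of `dnf updateinfo --list --updates` that look
# like security advisories. The "Sec."/"security" markers are an assumption
# about the default output format, not a documented contract.
count_security_updates() {
  dnf updateinfo --list --updates 2>/dev/null | grep -ci -e 'sec\.' -e 'security'
}
```

Note that grep exits non-zero when nothing matches, so the function's exit status is non-zero when the count is 0; check its output rather than its status.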
<availability> specifies whether advisories about newer versions of installed packages (omitted or --available), advisories about equal and older versions of installed packages (--installed), advisories about newer versions of those installed packages for which a newer version is available (--updates) or advisories about any versions of installed packages (--all) are taken into account. Most of the time, --available and --updates display the same output. The outputs differ only in the cases when an advisory refers to a newer version but there is no enabled repository which contains any newer version. Note that --available takes only the latest installed versions of packages into account. In the case of kernel packages (where multiple versions can be installed simultaneously), packages of the currently running kernel version are also added. To print only advisories referencing a CVE or a bugzilla, use the --with-cve or --with-bz options. When these switches are used, the output of --list is also altered: the ID of the CVE or the bugzilla is printed instead of that of the advisory. If <spec> is given and neither the ID, the type (bugfix, enhancement, security/sec), nor a package name of an advisory matches it, the advisory is not taken into account. The matching is case-sensitive and in the case of advisory IDs and package names, globbing is supported. Output of the --summary option is affected by the autocheck_running_kernel configuration option. Upgrade Command Command: upgrade Aliases: up Deprecated aliases: update, upgrade-to, update-to, localupdate dnf [options] upgrade Updates each package to the latest version that is both available and resolvable. dnf [options] upgrade <package-spec>... Updates each specified package to the latest available version. Updates dependencies as necessary. When versions are specified in the <package-spec>, update to these versions. dnf [options] upgrade @<spec>... Alias for the dnf module update command.
If the main obsoletes configuration option is true or the --obsoletes flag is present, dnf will include package obsoletes in its calculations. For more information see obsoletes. See also Configuration Files Replacement Policy. Upgrade-Minimal Command Command: upgrade-minimal Aliases: up-min Deprecated aliases: update-minimal dnf [options] upgrade-minimal Updates each package to the latest available version that provides a bugfix, enhancement or a fix for a security issue (security). dnf [options] upgrade-minimal <package-spec>... Updates each specified package to the latest available version that provides a bugfix, enhancement or a fix for a security issue (security). Updates dependencies as necessary. SPECIFYING PACKAGES top Many commands take a <package-spec> parameter that selects a package for the operation. The <package-spec> argument is matched against package NEVRAs, provides and file provides. <package-file-spec> is similar to <package-spec>, except provides matching is not performed. Therefore, <package-file-spec> is matched only against NEVRAs and file provides. <package-name-spec> is matched against NEVRAs only. Globs Package specification supports the same glob pattern matching that the shell does, across all three of the targets mentioned above (NEVRAs, provides and file provides). The following patterns are supported: * Matches any number of characters. ? Matches any single character. [] Matches any one of the enclosed characters. A pair of characters separated by a hyphen denotes a range expression; any character that falls between those two characters, inclusive, is matched. If the first character following the [ is a ! or a ^ then any character not enclosed is matched. Note: Curly brackets ({}) are not supported. You can still use them in shells that support them and let the shell do the expansion, but if quoted or escaped, dnf will not expand them. NEVRA Matching When matching against NEVRAs, partial matching is supported.
DNF tries to match the spec against the following list of NEVRA forms (in decreasing order of priority): name-[epoch:]version-release.arch name.arch name name-[epoch:]version-release name-[epoch:]version Note that name can in general contain dashes (e.g. package-with-dashes). The first form that matches any packages is used and the remaining forms are not tried. If none of the forms match any packages, an attempt is made to match the <package-spec> against full package NEVRAs. This is only relevant if globs are present in the <package-spec>. <package-spec> matches NEVRAs the same way <package-name-spec> does, but in case matching NEVRAs fails, it attempts to match against provides and file provides of packages as well. You can specify globs as part of any of the five NEVRA components. You can also specify a glob pattern to match over multiple NEVRA components (in other words, to match across the NEVRA separators). In that case, however, you need to write the spec to match against full package NEVRAs, as it is not possible to split such a spec into NEVRA forms. Specifying NEVRA Matching Explicitly Some commands (autoremove, install, remove and repoquery) also have aliases with suffixes -n, -na and -nevra that allow you to specify explicitly how the arguments are parsed: Command install-n only matches against name. Command install-na only matches against name.arch. Command install-nevra only matches against name-[epoch:]version-release.arch. SPECIFYING PROVIDES top <provide-spec> in command descriptions means the command operates on packages providing the given spec. This can either be an explicit provide, an implicit provide (i.e. the name of the package) or a file provide. The selection is case-sensitive and globbing is supported. Specifying File Provides If a spec starts with either / or */, it is considered a potential file provide. SPECIFYING GROUPS top <group-spec> allows one to select (environment) groups a particular operation should work on.
It is a case insensitive string (supporting globbing characters) that is matched against a group's ID, canonical name and name translated into the current LC_MESSAGES locale (if possible). SPECIFYING MODULES top <module-spec> allows one to select modules or profiles a particular operation should work on. It is in the form of NAME:STREAM:VERSION:CONTEXT:ARCH/PROFILE and supported partial forms are the following: NAME NAME:STREAM NAME:STREAM:VERSION NAME:STREAM:VERSION:CONTEXT all above combinations with ::ARCH (e.g. NAME::ARCH) NAME:STREAM:VERSION:CONTEXT:ARCH all above combinations with /PROFILE (e.g. NAME/PROFILE) In case stream is not specified, the enabled or the default stream is used, in this order. In case profile is not specified, the system default profile or the 'default' profile is used. SPECIFYING TRANSACTIONS top <transaction-spec> can be in one of several forms. If it is an integer, it specifies a transaction ID. Specifying last is the same as specifying the ID of the most recent transaction. The last form is last-<offset>, where <offset> is a positive integer. It specifies the offset-th transaction preceding the most recent transaction. PACKAGE FILTERING top Package filtering filters packages out from the available package set, making them invisible to most of dnf commands. They cannot be used in a transaction. Packages can be filtered out by either Exclude Filtering or Modular Filtering. Exclude Filtering Exclude Filtering is a mechanism used by a user or by a DNF plugin to modify the set of available packages. Exclude Filtering can be modified by either includepkgs or excludepkgs configuration options in configuration files. The --disableexcludes command line option can be used to override excludes from configuration files. In addition to user-configured excludes, plugins can also extend the set of excluded packages. To disable excludes from a DNF plugin you can use the --disableplugin command line option. To disable all excludes for e.g.
the install command you can use the following combination of command line options: dnf --disableexcludes=all --disableplugin="*" install bash Modular Filtering Please see the modularity documentation for details on how Modular Filtering works. With modularity, only RPM packages from active module streams are included in the available package set. RPM packages from inactive module streams, as well as non-modular packages with the same name or provides as a package from an active module stream, are filtered out. Modular filtering is not applied to packages added from the command line, installed packages, or packages from repositories with module_hotfixes=true in their .repo file. Disabling of modular filtering is not recommended, because it could cause the system to get into a broken state. To disable modular filtering for a particular repository, specify module_hotfixes=true in the .repo file or use --setopt=<repo_id>.module_hotfixes=true. To discover the module which contains an excluded package, use dnf module provides. METADATA SYNCHRONIZATION top Correct operation of DNF depends on having access to up-to-date data from all enabled repositories but contacting remote mirrors on every operation considerably slows it down and costs bandwidth for both the client and the repository provider. The metadata_expire (see dnf.conf(5)) repository configuration option is used by DNF to determine whether a particular local copy of repository data is due to be re-synced. It is crucial that the repository providers set the option well, namely to a value where it is guaranteed that if particular metadata was available at time T on the server, then all packages it references will still be available for download from the server at time T + metadata_expire. To further reduce the bandwidth load, some of the commands where having up-to-date metadata is not critical (e.g.
the list command) do not check whether a repository has expired; if any version of its metadata is locally available to the user's account, that version is used. For non-root use, see also the --cacheonly switch. Note that in all situations the user can force synchronization of all enabled repositories with the --refresh switch. CONFIGURATION FILES REPLACEMENT POLICY top When a package is updated, its new configuration files may either replace your old, modified files or leave them in place. In fact, neither file is discarded: RPM keeps both and appends an additional suffix (such as .rpmnew or .rpmsave) to the name of the conflicting one. Which file keeps the original name after the transaction is not controlled by the package manager but is specified by each package itself, following the packaging guidelines. FILES top Cache Files /var/cache/dnf Main Configuration /etc/dnf/dnf.conf Repository /etc/yum.repos.d/ SEE ALSO top dnf.conf(5), DNF Configuration Reference dnf-PLUGIN(8) for documentation on DNF plugins. dnf.modularity(7), Modularity overview. dnf-transaction-json(5), Stored Transaction JSON Format Specification. DNF project homepage ( https://github.com/rpm-software-management/dnf/ ) How to report a bug ( https://github.com/rpm-software-management/dnf/wiki/Bug-Reporting ) YUM project homepage (http://yum.baseurl.org/ ) AUTHOR top See AUTHORS in DNF source distribution. COPYRIGHT top 2012-2020, Red Hat, Licensed under GPLv2+ COLOPHON top This page is part of the dnf (DNF Package Manager) project. Information about the project can be found at https://github.com/rpm-software-management/dnf. It is not known how to report bugs for this man page; if you know, please send a mail to man-pages@man7.org. This page was obtained from the project's upstream Git repository https://github.com/rpm-software-management/dnf.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-08.)
If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org 4.18.2 Dec 22, 2023 DNF(8) Pages that refer to this page: systemd-nspawn(1), dnf.conf(5), yum.conf(5) HTML rendering created 2023-12-22 by Michael Kerrisk. |

# dnf

> Package management utility for RHEL, Fedora, and CentOS (replaces yum).
> For equivalent commands in other package managers, see <https://wiki.archlinux.org/title/Pacman/Rosetta>.
> More information: <https://dnf.readthedocs.io>.

- Upgrade installed packages to the newest available versions:

`sudo dnf upgrade`

- Search packages via keywords:

`dnf search {{keyword1 keyword2 ...}}`

- Display details about a package:

`dnf info {{package}}`

- Install a new package (use `-y` to confirm all prompts automatically):

`sudo dnf install {{package1 package2 ...}}`

- Remove a package:

`sudo dnf remove {{package1 package2 ...}}`

- List installed packages:

`dnf list --installed`

- Find which packages provide a given command:

`dnf provides {{command}}`

- View all past operations:

`dnf history`