column
The column utility formats its input into multiple columns. The utility supports three modes: columns are filled before rows This is the default mode (required for backward compatibility). rows are filled before columns This mode is enabled by the option -x, --fillrows. table Determine the number of columns the input contains and create a table. This mode is enabled by the option -t, --table, and column formatting can be modified with the --table-* options. Use this mode if you are not sure. The output is aligned to the terminal width in interactive mode and to 80 columns in non-interactive mode (see --output-width for more details). Input is taken from file, or otherwise from standard input. Empty lines are ignored and all invalid multibyte sequences are encoded by the x<hex> convention. The columns argument for the --table-* options is a comma-separated list of column names as defined by --table-columns, or names defined by --table-column, or column numbers in the order they appear in the input. It is possible to mix names and numbers. The special placeholder '0' (e.g. -R0) may be used to specify all columns, and '-1' (e.g. -R -1) to specify the last visible column. Ranges like '1-5' may be used when addressing columns by number. -J, --json Use JSON output format to print the table; the option --table-columns is required and the option --table-name is recommended. -c, --output-width width Output is formatted to a width specified as a number of characters. The original name of this option is --columns; this name has been deprecated since v2.30. Note that input longer than width is not truncated by default. The default is the terminal width, and 80 columns in non-interactive mode. The column headers are never truncated. The placeholder "unlimited" (or 0) can be used to leave the output width unrestricted. This is recommended, for example, when writing output to files rather than to a terminal. -d, --table-noheadings Do not print the header. This option allows the use of logical column names on the command line, but keeps the header hidden when printing the table. -o, --output-separator string Specify the column delimiter for table output (default is two spaces). -s, --separator separators Specify the possible input item delimiters (default is whitespace). -t, --table Determine the number of columns the input contains and create a table. Columns are delimited with whitespace, by default, or with the characters supplied using the --output-separator option. Table output is useful for pretty-printing. -C, --table-column properties Define one column by a comma-separated list of column attributes. This option can be used more than once; every use defines just one column. The properties replace some of the --table-* options. For example, --table-column name=FOO,right defines one column where text is aligned to the right. The option is mutually exclusive with --table-columns. The currently supported attributes are: name=string Specifies the column name. trunc The column text can be truncated when necessary. The same as --table-truncate. right Right-align text in the specified columns. The same as --table-right. width=number Specifies the column width. The width is used as a hint only. The width is strictly followed only when the strictwidth attribute is also used. strictwidth Strictly follow the column width= setting. noextreme Specify columns where unusually long cells may be ignored. See --table-noextreme for more details. wrap Specify columns where a multi-line cell may be used for long text when necessary. See --table-wrap. hide Don't print the specified columns. See --table-hide. json=type Define the column type for JSON output. Supported types are string, number and boolean. -N, --table-columns names Specify the column names by a comma-separated list of names. The names are used for the table header or to address columns in option arguments. See also --table-column. -l, --table-columns-limit number Specify the maximal number of input columns. The last column will contain all remaining line data if the limit is smaller than the number of columns in the input data. -R, --table-right columns Right-align text in the specified columns. -T, --table-truncate columns Specify columns where text can be truncated when necessary; otherwise very long table entries may be printed on multiple lines. -E, --table-noextreme columns Specify columns where unusually long (longer than average) cells may be ignored when calculating the column width. The option affects the width calculation and table formatting, but the printed text is not affected. The option is used for the last visible column by default. -e, --table-header-repeat Print the header line for each page. -W, --table-wrap columns Specify columns where a multi-line cell may be used for long text when necessary. -H, --table-hide columns Don't print the specified columns. The special placeholder '-' may be used to hide all unnamed columns (see --table-columns). -O, --table-order columns Specify the column order on output. -n, --table-name name Specify the table name used for JSON output. The default is "table". -m, --table-maxout Fill all available space on output. -L, --keep-empty-lines Preserve whitespace-only lines in the input. The default is to ignore empty lines entirely. This option's original name was --table-empty-lines, but it is now deprecated because it gives the false impression that the option only applies to table mode. -r, --tree column Specify the column to use for tree-like output. Note that circular dependencies and other anomalies in the child-parent relation are silently ignored. -i, --tree-id column Specify the column with the line ID used to create the child-parent relation. -p, --tree-parent column Specify the column with the parent ID used to create the child-parent relation. -x, --fillrows Fill rows before filling columns. -h, --help Display help text and exit. -V, --version Print version and exit.
# column > Format `stdin` or a file into multiple columns. Columns are filled before > rows; the default separator is whitespace. More information: > https://manned.org/column. * Format the output of a command for a 30-character-wide display: `printf "header1 header2\nbar foo\n" | column --output-width {{30}}` * Split columns automatically and auto-align them in a tabular format: `printf "header1 header2\nbar foo\n" | column --table` * Specify the column delimiter character for the `--table` option (e.g. "," for CSV) (defaults to whitespace): `printf "header1,header2\nbar,foo\n" | column --table --separator {{,}}` * Fill rows before filling columns: `printf "header1\nbar\nfoobar\n" | column --output-width {{30}} --fillrows`
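An illustrative sketch of table mode with JSON output, based on the `--json` and `--table-columns` options described above (the column names `ID,NAME` and table name `items` are arbitrary examples):
* Print the input as a JSON table with named columns (`--table-columns` is required for JSON output): `printf "1 foo\n2 bar\n" | column --table --table-columns {{ID,NAME}} --json --table-name {{items}}`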
seq
Print numbers from FIRST to LAST, in steps of INCREMENT. Mandatory arguments to long options are mandatory for short options too. -f, --format=FORMAT use printf style floating-point FORMAT -s, --separator=STRING use STRING to separate numbers (default: \n) -w, --equal-width equalize width by padding with leading zeroes --help display this help and exit --version output version information and exit If FIRST or INCREMENT is omitted, it defaults to 1. That is, an omitted INCREMENT defaults to 1 even when LAST is smaller than FIRST. The sequence of numbers ends when the sum of the current number and INCREMENT would become greater than LAST. FIRST, INCREMENT, and LAST are interpreted as floating point values. INCREMENT is usually positive if FIRST is smaller than LAST, and INCREMENT is usually negative if FIRST is greater than LAST. INCREMENT must not be 0; none of FIRST, INCREMENT and LAST may be NaN. FORMAT must be suitable for printing one argument of type 'double'; it defaults to %.PRECf if FIRST, INCREMENT, and LAST are all fixed point decimal numbers with maximum precision PREC, and to %g otherwise.
# seq > Output a sequence of numbers to `stdout`. More information: > https://www.gnu.org/software/coreutils/seq. * Sequence from 1 to 10: `seq 10` * Every 3rd number from 5 to 20: `seq 5 3 20` * Separate the output with a space instead of a newline: `seq -s " " 5 3 20` * Format output width to a minimum of 4 digits padding with zeros as necessary: `seq -f "%04g" 5 3 20`
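A further example based on the `-w` option described above:
* Pad all numbers with leading zeros so every line has equal width: `seq -w 1 10`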
fmt
Reformat each paragraph in the FILE(s), writing to standard output. The option -WIDTH is an abbreviated form of --width=DIGITS. With no FILE, or when FILE is -, read standard input. Mandatory arguments to long options are mandatory for short options too. -c, --crown-margin preserve indentation of first two lines -p, --prefix=STRING reformat only lines beginning with STRING, reattaching the prefix to reformatted lines -s, --split-only split long lines, but do not refill -t, --tagged-paragraph indentation of first line different from second -u, --uniform-spacing one space between words, two after sentences -w, --width=WIDTH maximum line width (default of 75 columns) -g, --goal=WIDTH goal width (default of 93% of width) --help display this help and exit --version output version information and exit
# fmt > Reformat a text file by joining its paragraphs and limiting the line width > to a given number of characters (75 by default). More information: > https://www.gnu.org/software/coreutils/fmt. * Reformat a file: `fmt {{path/to/file}}` * Reformat a file producing output lines of (at most) `n` characters: `fmt -w {{n}} {{path/to/file}}` * Reformat a file without joining lines shorter than the given width together: `fmt -s {{path/to/file}}` * Reformat a file with uniform spacing (1 space between words and 2 spaces after sentences): `fmt -u {{path/to/file}}`
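An additional sketch using the `-p` option described above (the `#` prefix is only an example):
* Reformat only the lines beginning with a given prefix (e.g. `#` comment lines), reattaching the prefix to the reformatted lines: `fmt -p {{prefix}} {{path/to/file}}`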
groups
The groups command displays the current group names or ID values. If the value does not have a corresponding entry in /etc/group, the value will be displayed as the numerical group value. The optional user parameter will display the groups for the named user.
# groups > Print group memberships for a user. See also: `groupadd`, `groupdel`, > `groupmod`. More information: https://www.gnu.org/software/coreutils/groups. * Print group memberships for the current user: `groups` * Print group memberships for a list of users: `groups {{username1 username2 ...}}`
nm
The nm utility shall display symbolic information appearing in the object file, executable file, or object-file library named by file. If no symbolic information is available for a valid input file, the nm utility shall report that fact, but not consider it an error condition. The default base used when numeric values are written is unspecified. On XSI-conformant systems, it shall be decimal if the -P option is not specified. The nm utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines. The following options shall be supported: -A Write the full pathname or library name of an object on each line. -e Write only external (global) and static symbol information. -f Produce full output. Write redundant symbols (.text, .data, and .bss), normally suppressed. -g Write only external (global) symbol information. -o Write numeric values in octal (equivalent to -t o). -P Write information in a portable output format, as specified in the STDOUT section. -t format Write each numeric value in the specified format. The format shall be dependent on the single character used as the format option-argument: d decimal (default if -P is not specified). o octal. x hexadecimal (default if -P is specified). -u Write only undefined symbols. -v Sort output by value instead of by symbol name. -x Write numeric values in hexadecimal (equivalent to -t x).
# nm > List symbol names in object files. More information: https://manned.org/nm. * List global (extern) functions in a file (prefixed with T): `nm -g {{path/to/file.o}}` * List only undefined symbols in a file: `nm -u {{path/to/file.o}}` * List all symbols, even debugging symbols: `nm -a {{path/to/file.o}}` * Demangle C++ symbols (make them readable): `nm --demangle {{path/to/file.o}}`
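A further example based on the POSIX options listed above:
* List symbols in the portable output format described in the STDOUT section: `nm -P {{path/to/file.o}}`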
git-stage
This is a synonym for git-add(1). Please refer to the documentation of that command.
# git stage > Add file contents to the staging area. Synonym of `git add`. More > information: https://git-scm.com/docs/git-stage. * Add a file to the index: `git stage {{path/to/file}}` * Add all files (tracked and untracked): `git stage -A` * Only add already tracked files: `git stage -u` * Also add ignored files: `git stage -f` * Interactively stage parts of files: `git stage -p` * Interactively stage parts of a given file: `git stage -p {{path/to/file}}` * Interactively stage a file: `git stage -i`
dd
Copy a file, converting and formatting according to the operands. bs=BYTES read and write up to BYTES bytes at a time (default: 512); overrides ibs and obs cbs=BYTES convert BYTES bytes at a time conv=CONVS convert the file as per the comma separated symbol list count=N copy only N input blocks ibs=BYTES read up to BYTES bytes at a time (default: 512) if=FILE read from FILE instead of stdin iflag=FLAGS read as per the comma separated symbol list obs=BYTES write BYTES bytes at a time (default: 512) of=FILE write to FILE instead of stdout oflag=FLAGS write as per the comma separated symbol list seek=N (or oseek=N) skip N obs-sized output blocks skip=N (or iseek=N) skip N ibs-sized input blocks status=LEVEL The LEVEL of information to print to stderr; 'none' suppresses everything but error messages, 'noxfer' suppresses the final transfer statistics, 'progress' shows periodic transfer statistics N and BYTES may be followed by the following multiplicative suffixes: c=1, w=2, b=512, kB=1000, K=1024, MB=1000*1000, M=1024*1024, xM=M, GB=1000*1000*1000, G=1024*1024*1024, and so on for T, P, E, Z, Y, R, Q. Binary prefixes can be used, too: KiB=K, MiB=M, and so on. If N ends in 'B', it counts bytes not blocks. Each CONV symbol may be: ascii from EBCDIC to ASCII ebcdic from ASCII to EBCDIC ibm from ASCII to alternate EBCDIC block pad newline-terminated records with spaces to cbs-size unblock replace trailing spaces in cbs-size records with newline lcase change upper case to lower case ucase change lower case to upper case sparse try to seek rather than write all-NUL output blocks swab swap every pair of input bytes sync pad every input block with NULs to ibs-size; when used with block or unblock, pad with spaces rather than NULs excl fail if the output file already exists nocreat do not create the output file notrunc do not truncate the output file noerror continue after read errors fdatasync physically write output file data before finishing fsync likewise, but also write metadata Each FLAG symbol may be: append append mode (makes sense only for output; conv=notrunc suggested) direct use direct I/O for data directory fail unless a directory dsync use synchronized I/O for data sync likewise, but also for metadata fullblock accumulate full blocks of input (iflag only) nonblock use non-blocking I/O noatime do not update access time nocache Request to drop cache. See also oflag=sync noctty do not assign controlling terminal from file nofollow do not follow symlinks Sending a USR1 signal to a running 'dd' process makes it print I/O statistics to standard error and then resume copying. Options are: --help display this help and exit --version output version information and exit
# dd > Convert and copy a file. More information: > https://keith.github.io/xcode-man-pages/dd.1.html. * Make a bootable USB drive from an isohybrid file (such as `archlinux-xxx.iso`) and show the progress: `dd if={{path/to/file.iso}} of={{/dev/usb_device}} status=progress` * Clone a drive to another drive with 4 MB blocks, ignoring errors and showing the progress: `dd if={{/dev/source_device}} of={{/dev/dest_device}} bs={{4m}} conv={{noerror}} status=progress` * Generate a file of 100 random bytes using the kernel random driver: `dd if=/dev/urandom of={{path/to/random_file}} bs={{100}} count={{1}}` * Benchmark the write performance of a disk: `dd if=/dev/zero of={{path/to/1GB_file}} bs={{1024}} count={{1000000}}` * Generate a system backup into an IMG file and show the progress: `dd if=/dev/{{drive_device}} of={{path/to/file.img}} status=progress` * Restore a drive from an IMG file and show the progress: `dd if={{path/to/file.img}} of={{/dev/drive_device}} status=progress` * Check the progress of an ongoing dd operation (run this command from another shell): `kill -USR1 $(pgrep ^dd)`
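A small worked example of the size suffixes described above: since `M` = 1024*1024, 1024 blocks of 1 MiB each give exactly 1 GiB (the output path is illustrative):
* Create a 1 GiB file filled with zeros and show the progress: `dd if=/dev/zero of={{path/to/file}} bs=1M count=1024 status=progress`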
prlimit
Given a process ID and one or more resources, prlimit tries to retrieve and/or modify the limits. When a command is given, prlimit will run this command with the given arguments. The limits parameter is composed of a soft and a hard value, separated by a colon (:), used to modify the existing values. If no limits are given, prlimit will display the current values. If one of the values is not given, then the existing one will be used. To specify the unlimited or infinity limit (RLIM_INFINITY), the -1 or 'unlimited' string can be passed. Because of the nature of limits, the soft limit must be lower than or equal to the hard limit (also called the ceiling). To see all available resource limits, refer to the RESOURCE OPTIONS section. • soft:hard Specify both limits. • soft: Specify only the soft limit. • :hard Specify only the hard limit. • value Set both limits to the same value.
# prlimit > Get or set process resource soft and hard limits. Given a process ID and one > or more resources, prlimit tries to retrieve and/or modify the limits. More > information: https://manned.org/prlimit. * Display limit values for all current resources of the running parent process: `prlimit` * Display limit values for all current resources of a specified process: `prlimit --pid {{pid number}}` * Run a command with a custom limit on the number of open files: `prlimit --nofile={{10}} {{command}}`
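A hedged example of the `soft:hard` syntax described above (the PID and limit values are placeholders):
* Set both the soft and hard open-files limit of a running process: `prlimit --pid {{pid}} --nofile={{soft_limit}}:{{hard_limit}}`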
uniq
The uniq utility shall read an input file comparing adjacent lines, and write one copy of each input line on the output. The second and succeeding copies of repeated adjacent input lines shall not be written. The trailing <newline> of each line in the input shall be ignored when doing comparisons. Repeated lines in the input shall not be detected if they are not adjacent. The uniq utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines, except that '+' may be recognized as an option delimiter as well as '-'. The following options shall be supported: -c Precede each output line with a count of the number of times the line occurred in the input. -d Suppress the writing of lines that are not repeated in the input. -f fields Ignore the first fields fields on each input line when doing comparisons, where fields is a positive decimal integer. A field is the maximal string matched by the basic regular expression: [[:blank:]]*[^[:blank:]]* If the fields option-argument specifies more fields than appear on an input line, a null string shall be used for comparison. -s chars Ignore the first chars characters when doing comparisons, where chars shall be a positive decimal integer. If specified in conjunction with the -f option, the first chars characters after the first fields fields shall be ignored. If the chars option- argument specifies more characters than remain on an input line, a null string shall be used for comparison. -u Suppress the writing of lines that are repeated in the input.
# uniq > Output the unique lines from the given input or file. Since it does not > detect repeated lines unless they are adjacent, we need to sort them first. > More information: https://www.gnu.org/software/coreutils/uniq. * Display each line once: `sort {{path/to/file}} | uniq` * Display only unique lines: `sort {{path/to/file}} | uniq -u` * Display only duplicate lines: `sort {{path/to/file}} | uniq -d` * Display number of occurrences of each line along with that line: `sort {{path/to/file}} | uniq -c` * Display number of occurrences of each line, sorted by the most frequent: `sort {{path/to/file}} | uniq -c | sort -nr`
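An additional sketch using the `-f` option described above:
* Skip the first field on each line when comparing adjacent lines (e.g. ignore a leading timestamp): `uniq -f {{1}} {{path/to/file}}`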
git-remote
Manage the set of repositories ("remotes") whose branches you track. -v, --verbose Be a little more verbose and show the remote URL after the name. For promisor remotes, also show which filter (blob:none etc.) is configured. NOTE: This must be placed between remote and subcommand.
# git remote > Manage the set of tracked repositories ("remotes"). More information: > https://git-scm.com/docs/git-remote. * Show a list of existing remotes with their names and URLs: `git remote -v` * Show information about a remote: `git remote show {{remote_name}}` * Add a remote: `git remote add {{remote_name}} {{remote_url}}` * Change the URL of a remote (use `--add` to keep the existing URL): `git remote set-url {{remote_name}} {{new_url}}` * Show the URL of a remote: `git remote get-url {{remote_name}}` * Remove a remote: `git remote remove {{remote_name}}` * Rename a remote: `git remote rename {{old_name}} {{new_name}}`
systemd-path
systemd-path may be used to query system and user paths. The tool makes many of the paths described in file-hierarchy(7) available for querying. When invoked without arguments, a list of known paths and their current values is shown. When at least one argument is passed, the path with this name is queried and its value shown. The variables whose name begins with "search-" do not refer to individual paths, but instead to a list of colon-separated search paths, in their order of precedence. The following options are understood: --suffix= Printed paths are suffixed by the specified string. -h, --help Print a short help text and exit. --version Print a short version string and exit.
# systemd-path > List and query system and user paths. More information: > https://www.freedesktop.org/software/systemd/man/systemd-path.html. * Display a list of known paths and their current values: `systemd-path` * Query the specified path and display its value: `systemd-path "{{path_name}}"` * Suffix printed paths with `suffix_string`: `systemd-path --suffix {{suffix_string}}` * Print a short version string and then exit: `systemd-path --version`
whatis
Each manual page has a short description available within it. whatis searches the manual page names and displays the manual page descriptions of any name matched. name may contain wildcards (-w) or be a regular expression (-r). Using these options, it may be necessary to quote the name or escape (\) the special characters to stop the shell from interpreting them. index databases are used during the search, and are updated by the mandb program. Depending on your installation, this may be run by a periodic cron job, or may need to be run manually after new manual pages have been installed. To produce an old style text whatis database from the relative index database, issue the command: whatis -M manpath -w '*' | sort > manpath/whatis where manpath is a manual page hierarchy such as /usr/man. -d, --debug Print debugging information. -v, --verbose Print verbose warning messages. -r, --regex Interpret each name as a regular expression. If a name matches any part of a page name, a match will be made. This option causes whatis to be somewhat slower due to the nature of database searches. -w, --wildcard Interpret each name as a pattern containing shell style wildcards. For a match to be made, an expanded name must match the entire page name. This option causes whatis to be somewhat slower due to the nature of database searches. -l, --long Do not trim output to the terminal width. Normally, output will be truncated to the terminal width to avoid ugly results from poorly-written NAME sections. -s list, --sections=list, --section=list Search only the given manual sections. list is a colon- or comma-separated list of sections. If an entry in list is a simple section, for example "3", then the displayed list of descriptions will include pages in sections "3", "3perl", "3x", and so on; while if an entry in list has an extension, for example "3perl", then the list will only include pages in that exact part of the manual section. -m system[,...], --systems=system[,...] If this system has access to other operating systems' manual page names, they can be accessed using this option. To search NewOS's manual page names, use the option -m NewOS. The system specified can be a combination of comma delimited operating system names. To include a search of the native operating system's manual page names, include the system name man in the argument string. This option will override the $SYSTEM environment variable. -M path, --manpath=path Specify an alternate set of colon-delimited manual page hierarchies to search. By default, whatis uses the $MANPATH environment variable, unless it is empty or unset, in which case it will determine an appropriate manpath based on your $PATH environment variable. This option overrides the contents of $MANPATH. -L locale, --locale=locale whatis will normally determine your current locale by a call to the C function setlocale(3) which interrogates various environment variables, possibly including $LC_MESSAGES and $LANG. To temporarily override the determined value, use this option to supply a locale string directly to whatis. Note that it will not take effect until the search for pages actually begins. Output such as the help message will always be displayed in the initially determined locale. -C file, --config-file=file Use this user configuration file rather than the default of ~/.manpath. -?, --help Print a help message and exit. --usage Print a short usage message and exit. -V, --version Display version information.
# whatis > Search a set of database files containing short descriptions of system > commands for keywords. More information: > http://www.linfo.org/whatis.html. * Search for information about a keyword: `whatis {{keyword}}` * Search for information about multiple keywords: `whatis {{keyword1}} {{keyword2}}`
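Further examples based on the `-s` and `-w` options described above (the section number and pattern are illustrative):
* Search only a specific manual section: `whatis -s {{3}} {{printf}}`
* Interpret the name as a shell-style wildcard pattern: `whatis -w '{{ssh*}}'`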
git-grep
Look for specified patterns in the tracked files in the work tree, blobs registered in the index file, or blobs in given tree objects. Patterns are lists of one or more search expressions separated by newline characters. An empty string as search expression matches all lines. --cached Instead of searching tracked files in the working tree, search blobs registered in the index file. --no-index Search files in the current directory that is not managed by Git. --untracked In addition to searching in the tracked files in the working tree, search also in untracked files. --no-exclude-standard Also search in ignored files by not honoring the .gitignore mechanism. Only useful with --untracked. --exclude-standard Do not pay attention to ignored files specified via the .gitignore mechanism. Only useful when searching files in the current directory with --no-index. --recurse-submodules Recursively search in each submodule that is active and checked out in the repository. When used in combination with the <tree> option the prefix of all submodule output will be the name of the parent project’s <tree> object. This option has no effect if --no-index is given. -a, --text Process binary files as if they were text. --textconv Honor textconv filter settings. --no-textconv Do not honor textconv filter settings. This is the default. -i, --ignore-case Ignore case differences between the patterns and the files. -I Don’t match the pattern in binary files. --max-depth <depth> For each <pathspec> given on command line, descend at most <depth> levels of directories. A value of -1 means no limit. This option is ignored if <pathspec> contains active wildcards. In other words if "a*" matches a directory named "a*", "*" is matched literally so --max-depth is still effective. -r, --recursive Same as --max-depth=-1; this is the default. --no-recursive Same as --max-depth=0. -w, --word-regexp Match the pattern only at word boundary (either begin at the beginning of a line, or preceded by a non-word character; end at the end of a line or followed by a non-word character). -v, --invert-match Select non-matching lines. -h, -H By default, the command shows the filename for each match. -h option is used to suppress this output. -H is there for completeness and does not do anything except it overrides -h given earlier on the command line. --full-name When run from a subdirectory, the command usually outputs paths relative to the current directory. This option forces paths to be output relative to the project top directory. -E, --extended-regexp, -G, --basic-regexp Use POSIX extended/basic regexp for patterns. Default is to use basic regexp. -P, --perl-regexp Use Perl-compatible regular expressions for patterns. Support for these types of regular expressions is an optional compile-time dependency. If Git wasn’t compiled with support for them providing this option will cause it to die. -F, --fixed-strings Use fixed strings for patterns (don’t interpret pattern as a regex). -n, --line-number Prefix the line number to matching lines. --column Prefix the 1-indexed byte-offset of the first match from the start of the matching line. -l, --files-with-matches, --name-only, -L, --files-without-match Instead of showing every matched line, show only the names of files that contain (or do not contain) matches. For better compatibility with git diff, --name-only is a synonym for --files-with-matches. -O[<pager>], --open-files-in-pager[=<pager>] Open the matching files in the pager (not the output of grep). 
If the pager happens to be "less" or "vi", and the user specified only one pattern, the first file is positioned at the first match automatically. The pager argument is optional; if specified, it must be stuck to the option without a space. If pager is unspecified, the default pager will be used (see core.pager in git-config(1)). -z, --null Use \0 as the delimiter for pathnames in the output, and print them verbatim. Without this option, pathnames with "unusual" characters are quoted as explained for the configuration variable core.quotePath (see git-config(1)). -o, --only-matching Print only the matched (non-empty) parts of a matching line, with each such part on a separate output line. -c, --count Instead of showing every matched line, show the number of lines that match. --color[=<when>] Show colored matches. The value must be always (the default), never, or auto. --no-color Turn off match highlighting, even when the configuration file gives the default to color output. Same as --color=never. --break Print an empty line between matches from different files. --heading Show the filename above the matches in that file instead of at the start of each shown line. -p, --show-function Show the preceding line that contains the function name of the match, unless the matching line is a function name itself. The name is determined in the same way as git diff works out patch hunk headers (see Defining a custom hunk-header in gitattributes(5)). -<num>, -C <num>, --context <num> Show <num> leading and trailing lines, and place a line containing -- between contiguous groups of matches. -A <num>, --after-context <num> Show <num> trailing lines, and place a line containing -- between contiguous groups of matches. -B <num>, --before-context <num> Show <num> leading lines, and place a line containing -- between contiguous groups of matches. -W, --function-context Show the surrounding text from the previous line containing a function name up to the one before the next function name, effectively showing the whole function in which the match was found. The function names are determined in the same way as git diff works out patch hunk headers (see Defining a custom hunk-header in gitattributes(5)). -m <num>, --max-count <num> Limit the amount of matches per file. When using the -v or --invert-match option, the search stops after the specified number of non-matches. A value of -1 will return unlimited results (the default). A value of 0 will exit immediately with a non-zero status. --threads <num> Number of grep worker threads to use. See grep.threads in CONFIGURATION for more information. -f <file> Read patterns from <file>, one per line. Passing the pattern via <file> allows for providing a search pattern containing a \0. Not all pattern types support patterns containing \0. Git will error out if a given pattern type can’t support such a pattern. The --perl-regexp pattern type when compiled against the PCRE v2 backend has the widest support for these types of patterns. In versions of Git before 2.23.0 patterns containing \0 would be silently considered fixed. This was never documented, there were also odd and undocumented interactions between e.g. non-ASCII patterns containing \0 and --ignore-case. In future versions we may learn to support patterns containing \0 for more search backends, until then we’ll die when the pattern type in question doesn’t support them. -e The next parameter is the pattern. 
This option has to be used for patterns starting with - and should be used in scripts passing user input to grep. Multiple patterns are combined by or. --and, --or, --not, ( ... ) Specify how multiple patterns are combined using Boolean expressions. --or is the default operator. --and has higher precedence than --or. -e has to be used for all patterns. --all-match When giving multiple pattern expressions combined with --or, this flag is specified to limit the match to files that have lines to match all of them. -q, --quiet Do not output matched lines; instead, exit with status 0 when there is a match and with non-zero status when there isn’t. <tree>... Instead of searching tracked files in the working tree, search blobs in the given trees. -- Signals the end of options; the rest of the parameters are <pathspec> limiters. <pathspec>... If given, limit the search to paths matching at least one pattern. Both leading paths match and glob(7) patterns are supported. For more details about the <pathspec> syntax, see the pathspec entry in gitglossary(7).
# git-grep > Find strings inside files anywhere in a repository's history. Accepts a lot > of the same flags as regular `grep`. More information: > https://git-scm.com/docs/git-grep. * Search for a string in tracked files: `git grep {{search_string}}` * Search for a string in files matching a pattern in tracked files: `git grep {{search_string}} -- {{file_glob_pattern}}` * Search for a string in tracked files, including submodules: `git grep --recurse-submodules {{search_string}}` * Search for a string at a specific point in history: `git grep {{search_string}} {{HEAD~2}}` * Search for a string across all branches: `git grep {{search_string}} $(git rev-list --all)`
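A hedged sketch of combining patterns with the Boolean operators described above:
* Show only lines that match both of two patterns (`-e` is required for every pattern): `git grep -e {{pattern1}} --and -e {{pattern2}}`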
touch
Update the access and modification times of each FILE to the current time. A FILE argument that does not exist is created empty, unless -c or -h is supplied. A FILE argument string of - is handled specially and causes touch to change the times of the file associated with standard output. Mandatory arguments to long options are mandatory for short options too. -a change only the access time -c, --no-create do not create any files -d, --date=STRING parse STRING and use it instead of current time -f (ignored) -h, --no-dereference affect each symbolic link instead of any referenced file (useful only on systems that can change the timestamps of a symlink) -m change only the modification time -r, --reference=FILE use this file's times instead of current time -t STAMP use [[CC]YY]MMDDhhmm[.ss] instead of current time --time=WORD change the specified time: WORD is access, atime, or use: equivalent to -a WORD is modify or mtime: equivalent to -m --help display this help and exit --version output version information and exit Note that the -d and -t options accept different time-date formats.
# touch > Create files and set access/modification times. More information: > https://manned.org/man/freebsd-13.1/touch. * Create specific files: `touch {{path/to/file1 path/to/file2 ...}}` * Set the file [a]ccess or [m]odification times to the current one and don't [c]reate file if it doesn't exist: `touch -c -{{a|m}} {{path/to/file1 path/to/file2 ...}}` * Set the file [t]ime to a specific value and don't [c]reate file if it doesn't exist: `touch -c -t {{YYYYMMDDHHMM.SS}} {{path/to/file1 path/to/file2 ...}}` * Set the file time of a specific file to the time of anothe[r] file and don't [c]reate file if it doesn't exist: `touch -c -r {{~/.emacs}} {{path/to/file1 path/to/file2 ...}}`
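An additional example using the `-d` option described above, assuming GNU touch (the date string shown is illustrative; GNU touch accepts a variety of date formats):
* Set the file times from a human-readable [d]ate string instead of the current time: `touch -d {{"2024-01-01 12:00"}} {{path/to/file}}`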
vdir
List information about the FILEs (the current directory by default). Sort entries alphabetically if none of -cftuvSUX nor --sort is specified. Mandatory arguments to long options are mandatory for short options too. -a, --all do not ignore entries starting with . -A, --almost-all do not list implied . and .. --author with -l, print the author of each file -b, --escape print C-style escapes for nongraphic characters --block-size=SIZE with -l, scale sizes by SIZE when printing them; e.g., '--block-size=M'; see SIZE format below -B, --ignore-backups do not list implied entries ending with ~ -c with -lt: sort by, and show, ctime (time of last change of file status information); with -l: show ctime and sort by name; otherwise: sort by ctime, newest first -C list entries by columns --color[=WHEN] color the output WHEN; more info below -d, --directory list directories themselves, not their contents -D, --dired generate output designed for Emacs' dired mode -f list all entries in directory order -F, --classify[=WHEN] append indicator (one of */=>@|) to entries WHEN --file-type likewise, except do not append '*' --format=WORD across -x, commas -m, horizontal -x, long -l, single-column -1, verbose -l, vertical -C --full-time like -l --time-style=full-iso -g like -l, but do not list owner --group-directories-first group directories before files; can be augmented with a --sort option, but any use of --sort=none (-U) disables grouping -G, --no-group in a long listing, don't print group names -h, --human-readable with -l and -s, print sizes like 1K 234M 2G etc. --si likewise, but use powers of 1000 not 1024 -H, --dereference-command-line follow symbolic links listed on the command line --dereference-command-line-symlink-to-dir follow each command line symbolic link that points to a directory --hide=PATTERN do not list implied entries matching shell PATTERN (overridden by -a or -A) --hyperlink[=WHEN] hyperlink file names WHEN --indicator-style=WORD append indicator with style WORD to entry names: none (default), slash (-p), file-type (--file-type), classify (-F) -i, --inode print the index number of each file -I, --ignore=PATTERN do not list implied entries matching shell PATTERN -k, --kibibytes default to 1024-byte blocks for file system usage; used only with -s and per directory totals -l use a long listing format -L, --dereference when showing file information for a symbolic link, show information for the file the link references rather than for the link itself -m fill width with a comma separated list of entries -n, --numeric-uid-gid like -l, but list numeric user and group IDs -N, --literal print entry names without quoting -o like -l, but do not list group information -p, --indicator-style=slash append / indicator to directories -q, --hide-control-chars print ? 
instead of nongraphic characters --show-control-chars show nongraphic characters as-is (the default, unless program is 'ls' and output is a terminal) -Q, --quote-name enclose entry names in double quotes --quoting-style=WORD use quoting style WORD for entry names: literal, locale, shell, shell-always, shell-escape, shell-escape-always, c, escape (overrides QUOTING_STYLE environment variable) -r, --reverse reverse order while sorting -R, --recursive list subdirectories recursively -s, --size print the allocated size of each file, in blocks -S sort by file size, largest first --sort=WORD sort by WORD instead of name: none (-U), size (-S), time (-t), version (-v), extension (-X), width --time=WORD select which timestamp used to display or sort; access time (-u): atime, access, use; metadata change time (-c): ctime, status; modified time (default): mtime, modification; birth time: birth, creation; with -l, WORD determines which time to show; with --sort=time, sort by WORD (newest first) --time-style=TIME_STYLE time/date format with -l; see TIME_STYLE below -t sort by time, newest first; see --time -T, --tabsize=COLS assume tab stops at each COLS instead of 8 -u with -lt: sort by, and show, access time; with -l: show access time and sort by name; otherwise: sort by access time, newest first -U do not sort; list entries in directory order -v natural sort of (version) numbers within text -w, --width=COLS set output width to COLS. 0 means no limit -x list entries by lines instead of by columns -X sort alphabetically by entry extension -Z, --context print any security context of each file --zero end each output line with NUL, not newline -1 list one file per line --help display this help and exit --version output version information and exit The SIZE argument is an integer and optional unit (example: 10K is 10*1024). Units are K,M,G,T,P,E,Z,Y,R,Q (powers of 1024) or KB,MB,... (powers of 1000). Binary prefixes can be used, too: KiB=K, MiB=M, and so on. The TIME_STYLE argument can be full-iso, long-iso, iso, locale, or +FORMAT. FORMAT is interpreted like in date(1). If FORMAT is FORMAT1<newline>FORMAT2, then FORMAT1 applies to non-recent files and FORMAT2 to recent files. TIME_STYLE prefixed with 'posix-' takes effect only outside the POSIX locale. Also the TIME_STYLE environment variable sets the default style to use. The WHEN argument defaults to 'always' and can also be 'auto' or 'never'. Using color to distinguish file types is disabled both by default and with --color=never. With --color=auto, ls emits color codes only when standard output is connected to a terminal. The LS_COLORS environment variable can change the settings. Use the dircolors(1) command to set it. Exit status: 0 if OK, 1 if minor problems (e.g., cannot access subdirectory), 2 if serious trouble (e.g., cannot access command-line argument).
# vdir > List directory contents. Drop-in replacement for `ls -l`. More information: > https://www.gnu.org/software/coreutils/vdir. * List files and directories in the current directory, one per line, with details: `vdir` * List with sizes displayed in human-readable units (KB, MB, GB): `vdir -h` * List including hidden files (starting with a dot): `vdir -a` * List files and directories sorting entries by size (largest first): `vdir -S` * List files and directories sorting entries by modification time (newest first): `vdir -t` * List grouping directories first: `vdir --group-directories-first` * Recursively list all files and directories in a specific directory: `vdir --recursive {{path/to/directory}}`
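One more example based on the `-I` option listed above (the pattern is illustrative):
* List the current directory while hiding entries that match a shell pattern: `vdir -I '{{*.log}}'`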
pmap
The pmap command reports the memory map of a process or processes. -x, --extended Show the extended format. -d, --device Show the device format. -q, --quiet Do not display some header or footer lines. -A, --range low,high Limit results to the address range from low to high. Note that the low and high arguments are a single string separated by a comma. -X Show even more details than the -x option. WARNING: the format changes according to /proc/PID/smaps. -XX Show everything the kernel provides. -p, --show-path Show the full path to files in the mapping column. -c, --read-rc Read the default configuration. -C, --read-rc-from file Read the configuration from file. -n, --create-rc Create a new default configuration. -N, --create-rc-to file Write a new configuration to file. -h, --help Display help text and exit. -V, --version Display version information and exit.
# pmap > Report memory map of a process or processes. More information: > https://manned.org/pmap. * Print memory map for a specific process id (PID): `pmap {{pid}}` * Show the extended format: `pmap --extended {{pid}}` * Show the device format: `pmap --device {{pid}}` * Limit results to a memory address range specified by `low` and `high`: `pmap --range {{low}},{{high}}` * Print memory maps for multiple processes: `pmap {{pid1 pid2 ...}}`
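A further example based on the `-X` option described above:
* Show even more detail than the extended format (the output follows `/proc/PID/smaps` and may change between kernel versions): `pmap -X {{pid}}`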
killall
killall sends a signal to all processes running any of the specified commands. If no signal name is specified, SIGTERM is sent. Signals can be specified either by name (e.g. -HUP or -SIGHUP) or by number (e.g. -1) or by option -s. If the command name is not a regular expression (option -r) and contains a slash (/), processes executing that particular file will be selected for killing, independent of their name. killall returns a zero return code if at least one process has been killed for each listed command, or no commands were listed and at least one process matched the -u and -Z search criteria. killall returns non-zero otherwise. A killall process never kills itself (but may kill other killall processes). -e, --exact Require an exact match for very long names. If a command name is longer than 15 characters, the full name may be unavailable (i.e. it is swapped out). In this case, killall will kill everything that matches within the first 15 characters. With -e, such entries are skipped. killall prints a message for each skipped entry if -v is specified in addition to -e. -I, --ignore-case Perform a case-insensitive process name match. -g, --process-group Kill the process group to which the process belongs. The kill signal is only sent once per group, even if multiple processes belonging to the same process group were found. -i, --interactive Interactively ask for confirmation before killing. -l, --list List all known signal names. -n, --ns Match against the PID namespace of the given PID. The default is to match against all namespaces. -o, --older-than Match only processes that are older than (started before) the specified time. The time is specified as a float followed by a unit. The units are s,m,h,d,w,M,y for seconds, minutes, hours, days, weeks, months and years respectively. -q, --quiet Do not complain if no processes were killed. -r, --regexp Interpret the process name pattern as a POSIX extended regular expression, per regex(3). -s, --signal, -SIGNAL Send this signal instead of SIGTERM. -u, --user Kill only processes the specified user owns. Command names are optional. -v, --verbose Report if the signal was successfully sent. -V, --version Display version information. -w, --wait Wait for all killed processes to die. killall checks once per second if any of the killed processes still exist and only returns if none are left. Note that killall may wait forever if the signal was ignored, had no effect, or if the process stays in zombie state. -y, --younger-than Match only processes that are younger than (started after) the specified time. The time is specified as a float followed by a unit. The units are s,m,h,d,w,M,y for seconds, minutes, hours, days, weeks, months and years respectively. -Z, --context Specify a security context: kill only processes whose security context matches the given extended regular expression pattern. Must precede other arguments on the command line. Command names are optional.
# killall > Send a kill signal to all instances of a process by name (must be the exact > name). All signals except SIGKILL and SIGSTOP can be intercepted by the > process, allowing a clean exit. More information: https://manned.org/killall. * Terminate a process using the default SIGTERM (terminate) signal: `killall {{process_name}}` * [l]ist available signal names (to be used without the 'SIG' prefix): `killall -l` * Interactively ask for confirmation before termination: `killall -i {{process_name}}` * Terminate a process using the SIGINT (interrupt) signal, which is the same signal sent by pressing `Ctrl + C`: `killall -INT {{process_name}}` * Force kill a process: `killall -KILL {{process_name}}`
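An additional example using the `-w` option described above:
* Send SIGTERM and [w]ait until all matching processes have died: `killall -w {{process_name}}`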
who
Print information about users who are currently logged in. -a, --all same as -b -d --login -p -r -t -T -u -b, --boot time of last system boot -d, --dead print dead processes -H, --heading print line of column headings -l, --login print system login processes --lookup attempt to canonicalize hostnames via DNS -m only hostname and user associated with stdin -p, --process print active processes spawned by init -q, --count all login names and number of users logged on -r, --runlevel print current runlevel -s, --short print only name, line, and time (default) -t, --time print last system clock change -T, -w, --mesg add user's message status as +, - or ? -u, --users list users logged in --message same as -T --writable same as -T --help display this help and exit --version output version information and exit If FILE is not specified, use /var/run/utmp. /var/log/wtmp as FILE is common. If ARG1 ARG2 given, -m presumed: 'am i' or 'mom likes' are usual.
# who > Display who is logged in and related data (processes, boot time). More > information: https://www.gnu.org/software/coreutils/who. * Display the username, line, and time of all currently logged-in sessions: `who` * Display information only for the current terminal session: `who am i` * Display all available information: `who -a` * Display all available information with table headers: `who -a -H`
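Two more examples based on the options listed above:
* Display the time of the last system [b]oot: `who -b`
* Display all login names and a count of logged-on users: `who -q`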
mesg
The mesg utility is invoked by a user to control write access others have to the terminal device associated with standard error output. If write access is allowed, then programs such as talk(1) and write(1) may display messages on the terminal. Traditionally, write access is allowed by default. However, as users become more conscious of various security risks, there is a trend to remove write access by default, at least for the primary login shell. To make sure your ttys are set the way you want them to be set, mesg should be executed in your login scripts. The mesg utility silently exits with error status 2 if not executed on a terminal. In this case executing mesg is pointless. The command line option --verbose forces mesg to print a warning in this situation. This behaviour has been introduced in version 2.33. -v, --verbose Explain what is being done. -h, --help Display help text and exit. -V, --version Print version and exit.
# mesg > Check or set a terminal's ability to receive messages from other users, > usually from the write command. See also `write`. More information: > https://manned.org/mesg. * Check whether the terminal currently accepts messages: `mesg` * Disable receiving messages from the write command: `mesg n` * Enable receiving messages from the write command: `mesg y`
gcov
gcov is a test coverage program. Use it in concert with GCC to analyze your programs to help create more efficient, faster running code and to discover untested parts of your program. You can use gcov as a profiling tool to help discover where your optimization efforts will best affect your code. You can also use gcov along with the other profiling tool, gprof, to assess which parts of your code use the greatest amount of computing time. Profiling tools help you analyze your code's performance. Using a profiler such as gcov or gprof, you can find out some basic performance statistics, such as: * how often each line of code executes * what lines of code are actually executed * how much computing time each section of code uses Once you know these things about how your code works when compiled, you can look at each module to see which modules should be optimized. gcov helps you determine where to work on optimization. Software developers also use coverage testing in concert with testsuites, to make sure software is actually good enough for a release. Testsuites can verify that a program works as expected; a coverage program tests to see how much of the program is exercised by the testsuite. Developers can then determine what kinds of test cases need to be added to the testsuites to create both better testing and a better final product. You should compile your code without optimization if you plan to use gcov because the optimization, by combining some lines of code into one function, may not give you as much information as you need to look for `hot spots' where the code is using a great deal of computer time. Likewise, because gcov accumulates statistics by line (at the lowest resolution), it works best with a programming style that places only one statement on each line. If you use complicated macros that expand to loops or to other control structures, the statistics are less helpful---they only report on the line where the macro call appears. If your complex macros behave like functions, you can replace them with inline functions to solve this problem. gcov creates a logfile called sourcefile.gcov which indicates how many times each line of a source file sourcefile.c has executed. You can use these logfiles along with gprof to aid in fine-tuning the performance of your programs. gprof gives timing information you can use along with the information you get from gcov. gcov works only on code compiled with GCC. It is not compatible with any other profiling or test coverage mechanism. -a --all-blocks Write individual execution counts for every basic block. Normally gcov outputs execution counts only for the main blocks of a line. With this option you can determine if blocks within a single line are not being executed. -b --branch-probabilities Write branch frequencies to the output file, and write branch summary info to the standard output. This option allows you to see how often each branch in your program was taken. Unconditional branches will not be shown, unless the -u option is given. -c --branch-counts Write branch frequencies as the number of branches taken, rather than the percentage of branches taken. -d --display-progress Display the progress on the standard output. -f --function-summaries Output summaries for each function in addition to the file level summary. -h --help Display help about using gcov (on the standard output), and exit without doing any further processing. 
-i --json-format Output gcov file in an easy-to-parse JSON intermediate format which does not require source code for generation. The JSON file is compressed with gzip compression algorithm and the files have .gcov.json.gz extension. Structure of the JSON is following: { "current_working_directory": <current_working_directory>, "data_file": <data_file>, "format_version": <format_version>, "gcc_version": <gcc_version> "files": [<file>] } Fields of the root element have following semantics: * current_working_directory: working directory where a compilation unit was compiled * data_file: name of the data file (GCDA) * format_version: semantic version of the format * gcc_version: version of the GCC compiler Each file has the following form: { "file": <file_name>, "functions": [<function>], "lines": [<line>] } Fields of the file element have following semantics: * file_name: name of the source file Each function has the following form: { "blocks": <blocks>, "blocks_executed": <blocks_executed>, "demangled_name": "<demangled_name>, "end_column": <end_column>, "end_line": <end_line>, "execution_count": <execution_count>, "name": <name>, "start_column": <start_column> "start_line": <start_line> } Fields of the function element have following semantics: * blocks: number of blocks that are in the function * blocks_executed: number of executed blocks of the function * demangled_name: demangled name of the function * end_column: column in the source file where the function ends * end_line: line in the source file where the function ends * execution_count: number of executions of the function * name: name of the function * start_column: column in the source file where the function begins * start_line: line in the source file where the function begins Note that line numbers and column numbers number from 1. In the current implementation, start_line and start_column do not include any template parameters and the leading return type but that this is likely to be fixed in the future. Each line has the following form: { "branches": [<branch>], "count": <count>, "line_number": <line_number>, "unexecuted_block": <unexecuted_block> "function_name": <function_name>, } Branches are present only with -b option. Fields of the line element have following semantics: * count: number of executions of the line * line_number: line number * unexecuted_block: flag whether the line contains an unexecuted block (not all statements on the line are executed) * function_name: a name of a function this line belongs to (for a line with an inlined statements can be not set) Each branch has the following form: { "count": <count>, "fallthrough": <fallthrough>, "throw": <throw> } Fields of the branch element have following semantics: * count: number of executions of the branch * fallthrough: true when the branch is a fall through branch * throw: true when the branch is an exceptional branch -j --human-readable Write counts in human readable format (like 24.6k). -k --use-colors Use colors for lines of code that have zero coverage. We use red color for non-exceptional lines and cyan for exceptional. Same colors are used for basic blocks with -a option. -l --long-file-names Create long file names for included source files. For example, if the header file x.h contains code, and was included in the file a.c, then running gcov on the file a.c will produce an output file called a.c
# gcov > Code coverage analysis and profiling tool that discovers untested parts of a > program. Also displays a copy of source code annotated with execution > frequencies of code segments. More information: > https://gcc.gnu.org/onlinedocs/gcc/Invoking-Gcov.html. * Generate a coverage report named `file.cpp.gcov`: `gcov {{path/to/file.cpp}}` * Write individual execution counts for every basic block: `gcov --all-blocks {{path/to/file.cpp}}` * Write branch frequencies to the output file and print summary information to `stdout` as a percentage: `gcov --branch-probabilities {{path/to/file.cpp}}` * Write branch frequencies as the number of branches taken, rather than the percentage: `gcov --branch-counts {{path/to/file.cpp}}` * Do not create a `gcov` output file: `gcov --no-output {{path/to/file.cpp}}` * Write file level as well as function level summaries: `gcov --function-summaries {{path/to/file.cpp}}`
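A minimal end-to-end sketch of the workflow described above, using an illustrative single source file test.c:
$ gcc --coverage -O0 -c test.c    # instrument for coverage; writes test.gcno next to test.o
$ gcc --coverage test.o -o test   # link against the coverage runtime
$ ./test                          # running the program writes test.gcda
$ gcov -b -f test.c               # writes test.c.gcov with branch and per-function summaries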
ltrace
ltrace is a program that simply runs the specified command until it exits. It intercepts and records the dynamic library calls which are called by the executed process and the signals which are received by that process. It can also intercept and print the system calls executed by the program. Its use is very similar to strace(1). ltrace shows parameters of invoked functions and system calls. To determine what arguments each function has, it needs external declarations of function prototypes. Those are stored in files called prototype libraries--see ltrace.conf(5) for details on the syntax of these files. See the section PROTOTYPE LIBRARY DISCOVERY to learn how ltrace finds prototype libraries. -a, --align column Align return values in a specific column (default column is 5/8 of screen width). -A maxelts Maximum number of array elements to print before suppressing the rest with an ellipsis ("..."). This also limits the number of recursive structure expansions. -b, --no-signals Disable printing of signals received by the traced process. -c Count time and calls for each library call and report a summary on program exit. -C, --demangle Decode (demangle) low-level symbol names into user-level names. Besides removing any initial underscore prefix used by the system, this makes C++ function names readable. -D, --debug mask Show debugging output of ltrace itself. mask is a number describing which debug messages should be displayed. Use the option -Dh to see what can be used, but note that currently the only reliable debugmask is 77, which shows all debug messages. -e filter A qualifying expression which modifies which library calls (i.e. calls done through PLT slots, which are typically calls from the main binary to a library, or inter-library calls) to trace. Usage examples and the syntax description appear below in sections FILTER SPECIFICATIONS and FILTER EXPRESSIONS. If more than one -e option appears on the command line, the library calls that match any of them are traced. If no -e is given, @MAIN is assumed as a default. -f Trace child processes as they are created by currently traced processes as a result of the fork(2) or clone(2) system calls. The new process is attached immediately. -F, --config pathlist Contains a colon-separated list of paths. If a path refers to a directory, that directory is considered when prototype libraries are searched (see the section PROTOTYPE LIBRARY DISCOVERY). If it refers to a file, that file is imported implicitly to all loaded prototype libraries. -h, --help Show a summary of the options to ltrace and exit. -i Print the instruction pointer at the time of the library call. -l, --library library_pattern Display only calls to functions implemented by libraries that match library_pattern. This is as if you specified one -e for every symbol implemented in a library specified by library_pattern. Multiple library patterns can be specified with several instances of this option. Usage examples and the syntax description of library_pattern appear below in sections FILTER SPECIFICATIONS and FILTER EXPRESSIONS. Note that while this option selects calls that might be directed to the selected libraries, there's no actual guarantee that the call won't be directed elsewhere due to e.g. LD_PRELOAD or simply dependency ordering. If you want to make sure that symbols in a given library are actually called, use -x @library_pattern instead. -L When no -e option is given, don't assume the default action of @MAIN. In practice this means that library calls will not be traced.
-n, --indent nr Indent trace output by nr spaces for each level of call nesting. Using this option makes the program flow visualization easy to follow. Note that this also (uselessly) indents functions that never return, such as service functions for throwing exceptions in the C++ runtime. -o, --output filename Write the trace output to the file filename rather than to stderr. -p pid Attach to the process with the process ID pid and begin tracing. This option can be used together with passing a command to execute. It is possible to attach to several processes by passing more than one option -p. -r Print a relative timestamp with each line of the trace. This records the time difference between the beginning of successive lines. -s strsize Specify the maximum string size to print (the default is 32). -S Display system calls as well as library calls. -t Prefix each line of the trace with the time of day. -tt If given twice, the time printed will include the microseconds. -ttt If given thrice, the time printed will include the microseconds and the leading portion will be printed as the number of seconds since the epoch. -T Show the time spent inside each call. This records the time difference between the beginning and the end of each call. -u username Run command with the userid, groupid and supplementary groups of username. This option is only useful when running as root and enables the correct execution of setuid and/or setgid binaries. -w, --where nr Show a backtrace of nr stack frames for each traced function. This option is enabled only if elfutils or libunwind support was enabled at compile time. -x filter A qualifying expression which modifies which symbol table entry points to trace (those are typically calls inside a library or main binary, though PLT calls, traced by -e, land on entry points as well). Usage examples and the syntax description appear below in sections FILTER SPECIFICATIONS and FILTER EXPRESSIONS. If more than one -x option appears on the command line, the symbols that match any of them are traced. No entry points are traced if no -x is given. -V, --version Show the version number of ltrace and exit.
# ltrace > Display dynamic library calls of a process. More information: > https://manned.org/ltrace. * Print (trace) library calls of a program binary: `ltrace ./{{program}}` * Count library calls. Print a handy summary at the bottom: `ltrace -c {{path/to/program}}` * Trace calls to malloc and free, omit those done by libc: `ltrace -e malloc+free-@libc.so* {{path/to/program}}` * Write to file instead of terminal: `ltrace -o {{file}} {{path/to/program}}`
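A few combined invocations as a sketch of the options described above; the program name, PID and file names are illustrative:
$ ltrace -f -tt -T -o trace.log ./myprog   # follow forks, timestamp with microseconds, time each call, log to a file
$ ltrace -p 1234 -c                        # attach to PID 1234 and print a call/time summary when tracing ends
$ ltrace -n 2 -S -s 64 ./myprog            # indent nested calls, include system calls, print longer strings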
awk
The awk utility shall execute programs written in the awk programming language, which is specialized for textual data manipulation. An awk program is a sequence of patterns and corresponding actions. When input is read that matches a pattern, the action associated with that pattern is carried out. Input shall be interpreted as a sequence of records. By default, a record is a line, less its terminating <newline>, but this can be changed by using the RS built-in variable. Each record of input shall be matched in turn against each pattern in the program. For each pattern matched, the associated action shall be executed. The awk utility shall interpret each input record as a sequence of fields where, by default, a field is a string of non-<blank> non-<newline> characters. This default <blank> and <newline> field delimiter can be changed by using the FS built-in variable or the -F sepstring option. The awk utility shall denote the first field in a record $1, the second $2, and so on. The symbol $0 shall refer to the entire record; setting any other field causes the re-evaluation of $0. Assigning to $0 shall reset the values of all other fields and the NF built-in variable. The awk utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines. The following options shall be supported: -F sepstring Define the input field separator. This option shall be equivalent to: -v FS=sepstring except that if -F sepstring and -v FS=sepstring are both used, it is unspecified whether the FS assignment resulting from -F sepstring is processed in command line order or is processed after the last -v FS=sepstring. See the description of the FS built-in variable, and how it is used, in the EXTENDED DESCRIPTION section. -f progfile Specify the pathname of the file progfile containing an awk program. A pathname of '-' shall denote the standard input. If multiple instances of this option are specified, the concatenation of the files specified as progfile in the order specified shall be the awk program. The awk program can alternatively be specified in the command line as a single argument. -v assignment The application shall ensure that the assignment argument is in the same form as an assignment operand. The specified variable assignment shall occur prior to executing the awk program, including the actions associated with BEGIN patterns (if any). Multiple occurrences of this option can be specified.
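A small sketch of combining -f progfile with a -v assignment; sum.awk and data.txt are illustrative names:
$ printf '{ total += $n }\nEND { print total }\n' > sum.awk
$ awk -v n=3 -f sum.awk data.txt   # n is assigned before the program runs, so $n selects the third field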
# awk > A versatile programming language for working on files. More information: > https://github.com/onetrueawk/awk. * Print the fifth column (a.k.a. field) in a space-separated file: `awk '{print $5}' {{path/to/file}}` * Print the second column of the lines containing "foo" in a space-separated file: `awk '/{{foo}}/ {print $2}' {{path/to/file}}` * Print the last column of each line in a file, using a comma (instead of space) as a field separator: `awk -F ',' '{print $NF}' {{path/to/file}}` * Sum the values in the first column of a file and print the total: `awk '{s+=$1} END {print s}' {{path/to/file}}` * Print every third line starting from the first line: `awk 'NR%3==1' {{path/to/file}}` * Print different values based on conditions: `awk '{if ($1 == "foo") print "Exact match foo"; else if ($1 ~ "bar") print "Partial match bar"; else print "Baz"}' {{path/to/file}}` * Print all lines where the 10th column value equals the specified value: `awk '($10 == value)'` * Print all lines where the 10th column value is between a minimum and a maximum: `awk '($10 >= min_value && $10 <= max_value)'`
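Two further sketches in the same spirit, with illustrative inputs (the /etc/passwd layout and shell path are assumptions about the system):
$ awk -F ':' '$7 == "/bin/bash" { n++ } END { print n }' /etc/passwd   # count entries whose login shell (7th ':'-separated field) is /bin/bash
$ awk '{ sum += $2; n++ } END { if (n) print sum / n }' data.txt       # average of the second column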
git-cherry-pick
Given one or more existing commits, apply the change each one introduces, recording a new commit for each. This requires your working tree to be clean (no modifications from the HEAD commit). When it is not obvious how to apply a change, the following happens: 1. The current branch and HEAD pointer stay at the last commit successfully made. 2. The CHERRY_PICK_HEAD ref is set to point at the commit that introduced the change that is difficult to apply. 3. Paths in which the change applied cleanly are updated both in the index file and in your working tree. 4. For conflicting paths, the index file records up to three versions, as described in the "TRUE MERGE" section of git-merge(1). The working tree files will include a description of the conflict bracketed by the usual conflict markers <<<<<<< and >>>>>>>. 5. No other modifications are made. See git-merge(1) for some hints on resolving such conflicts. <commit>... Commits to cherry-pick. For a more complete list of ways to spell commits, see gitrevisions(7). Sets of commits can be passed but no traversal is done by default, as if the --no-walk option was specified, see git-rev-list(1). Note that specifying a range will feed all <commit>... arguments to a single revision walk (see a later example that uses maint master..next). -e, --edit With this option, git cherry-pick will let you edit the commit message prior to committing. --cleanup=<mode> This option determines how the commit message will be cleaned up before being passed on to the commit machinery. See git-commit(1) for more details. In particular, if the <mode> is given a value of scissors, scissors will be appended to MERGE_MSG before being passed on in the case of a conflict. -x When recording the commit, append a line that says "(cherry picked from commit ...)" to the original commit message in order to indicate which commit this change was cherry-picked from. This is done only for cherry picks without conflicts. Do not use this option if you are cherry-picking from your private branch because the information is useless to the recipient. If on the other hand you are cherry-picking between two publicly visible branches (e.g. backporting a fix to a maintenance branch for an older release from a development branch), adding this information can be useful. -r It used to be that the command defaulted to do -x described above, and -r was to disable it. Now the default is not to do -x so this option is a no-op. -m <parent-number>, --mainline <parent-number> Usually you cannot cherry-pick a merge because you do not know which side of the merge should be considered the mainline. This option specifies the parent number (starting from 1) of the mainline and allows cherry-pick to replay the change relative to the specified parent. -n, --no-commit Usually the command automatically creates a sequence of commits. This flag applies the changes necessary to cherry-pick each named commit to your working tree and the index, without making any commit. In addition, when this option is used, your index does not have to match the HEAD commit. The cherry-pick is done against the beginning state of your index. This is useful when cherry-picking more than one commits' effect to your index in a row. -s, --signoff Add a Signed-off-by trailer at the end of the commit message. See the signoff option in git-commit(1) for more information. -S[<keyid>], --gpg-sign[=<keyid>], --no-gpg-sign GPG-sign commits. 
The keyid argument is optional and defaults to the committer identity; if specified, it must be stuck to the option without a space. --no-gpg-sign is useful to countermand both commit.gpgSign configuration variable, and earlier --gpg-sign. --ff If the current HEAD is the same as the parent of the cherry-pick’ed commit, then a fast forward to this commit will be performed. --allow-empty By default, cherry-picking an empty commit will fail, indicating that an explicit invocation of git commit --allow-empty is required. This option overrides that behavior, allowing empty commits to be preserved automatically in a cherry-pick. Note that when "--ff" is in effect, empty commits that meet the "fast-forward" requirement will be kept even without this option. Note also, that use of this option only keeps commits that were initially empty (i.e. the commit recorded the same tree as its parent). Commits which are made empty due to a previous commit are dropped. To force the inclusion of those commits use --keep-redundant-commits. --allow-empty-message By default, cherry-picking a commit with an empty message will fail. This option overrides that behavior, allowing commits with empty messages to be cherry picked. --keep-redundant-commits If a commit being cherry picked duplicates a commit already in the current history, it will become empty. By default these redundant commits cause cherry-pick to stop so the user can examine the commit. This option overrides that behavior and creates an empty commit object. Implies --allow-empty. --strategy=<strategy> Use the given merge strategy. Should only be used once. See the MERGE STRATEGIES section in git-merge(1) for details. -X<option>, --strategy-option=<option> Pass the merge strategy-specific option through to the merge strategy. See git-merge(1) for details. --rerere-autoupdate, --no-rerere-autoupdate After the rerere mechanism reuses a recorded resolution on the current conflict to update the files in the working tree, allow it to also update the index with the result of resolution. --no-rerere-autoupdate is a good way to double-check what rerere did and catch potential mismerges, before committing the result to the index with a separate git add.
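As a sketch of the -m option, picking a merge commit relative to its first parent (the commit hash is illustrative):
$ git cherry-pick -m 1 -x 0a1b2c3   # replay the merge's change against parent #1 and record the origin in the message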
# git cherry-pick > Apply the changes introduced by existing commits to the current branch. To > apply changes to another branch, first use `git checkout` to switch to the > desired branch. More information: https://git-scm.com/docs/git-cherry-pick. * Apply a commit to the current branch: `git cherry-pick {{commit}}` * Apply a range of commits to the current branch (see also `git rebase --onto`): `git cherry-pick {{start_commit}}~..{{end_commit}}` * Apply multiple (non-sequential) commits to the current branch: `git cherry-pick {{commit_1}} {{commit_2}}` * Add the changes of a commit to the working directory, without creating a commit: `git cherry-pick --no-commit {{commit}}`
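A typical conflict-resolution sequence, sketched with illustrative names (git cherry-pick also accepts --continue, --skip and --abort to drive the sequencer):
$ git cherry-pick -x fa1afe1
$ git status                      # shows the conflicting paths
$ $EDITOR src/conflicted_file.c   # resolve the <<<<<<< ... >>>>>>> markers
$ git add src/conflicted_file.c
$ git cherry-pick --continue      # or: git cherry-pick --abort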
login
The login program is used to establish a new session with the system. It is normally invoked automatically by responding to the login: prompt on the user's terminal. login may be special to the shell and may not be invoked as a sub-process. When called from a shell, login should be executed as exec login which will cause the user to exit from the current shell (and thus will prevent the newly logged-in user from returning to the session of the caller). Attempting to execute login from any shell but the login shell will produce an error message. The user is then prompted for a password, where appropriate. Echoing is disabled to prevent revealing the password. Only a small number of password failures are permitted before login exits and the communications link is severed. If password aging has been enabled for your account, you may be prompted for a new password before proceeding. You will be forced to provide your old password and the new password before continuing. Please refer to passwd(1) for more information. Your user and group ID will be set according to their values in the /etc/passwd file. The values for $HOME, $SHELL, $PATH, $LOGNAME, and $MAIL are set according to the appropriate fields in the password entry. Ulimit, umask and nice values may also be set according to entries in the GECOS field. On some installations, the environment variable $TERM will be initialized to the terminal type on your tty line, as specified in /etc/ttytype. An initialization script for your command interpreter may also be executed. Please see the appropriate manual section for more information on this function. A subsystem login is indicated by the presence of a "*" as the first character of the login shell. The given home directory will be used as the root of a new file system which the user is actually logged into. The login program is NOT responsible for removing users from the utmp file. It is the responsibility of getty(8) and init(8) to clean up apparent ownership of a terminal session. If you use login from the shell prompt without exec, the user you use will continue to appear to be logged in even after you log out of the "subsession". -f Do not perform authentication; the user is preauthenticated. Note: In that case, username is mandatory. -h Name of the remote host for this login. -p Preserve environment. -r Perform autologin protocol for rlogin. The -r, -h and -f options are only used when login is invoked by root.
# login > Initiates a session for a user. More information: https://manned.org/login. * Log in as a user: `login {{user}}` * Log in as user without authentication if user is preauthenticated: `login -f {{user}}` * Log in as user and preserve environment: `login -p {{user}}` * Log in as a user on a remote host: `login -h {{host}} {{user}}`
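A minimal sketch of the exec pattern described above (the user name is illustrative):
$ exec login alice   # replaces the current shell, so ending the new session does not fall back to the caller's shell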
git-branch
If --list is given, or if there are no non-option arguments, existing branches are listed; the current branch will be highlighted in green and marked with an asterisk. Any branches checked out in linked worktrees will be highlighted in cyan and marked with a plus sign. Option -r causes the remote-tracking branches to be listed, and option -a shows both local and remote branches. If a <pattern> is given, it is used as a shell wildcard to restrict the output to matching branches. If multiple patterns are given, a branch is shown if it matches any of the patterns. Note that when providing a <pattern>, you must use --list; otherwise the command may be interpreted as branch creation. With --contains, shows only the branches that contain the named commit (in other words, the branches whose tip commits are descendants of the named commit), --no-contains inverts it. With --merged, only branches merged into the named commit (i.e. the branches whose tip commits are reachable from the named commit) will be listed. With --no-merged only branches not merged into the named commit will be listed. If the <commit> argument is missing it defaults to HEAD (i.e. the tip of the current branch). The command’s second form creates a new branch head named <branchname> which points to the current HEAD, or <start-point> if given. As a special case, for <start-point>, you may use "A...B" as a shortcut for the merge base of A and B if there is exactly one merge base. You can leave out at most one of A and B, in which case it defaults to HEAD. Note that this will create the new branch, but it will not switch the working tree to it; use "git switch <newbranch>" to switch to the new branch. When a local branch is started off a remote-tracking branch, Git sets up the branch (specifically the branch.<name>.remote and branch.<name>.merge configuration entries) so that git pull will appropriately merge from the remote-tracking branch. This behavior may be changed via the global branch.autoSetupMerge configuration flag. That setting can be overridden by using the --track and --no-track options, and changed later using git branch --set-upstream-to. With a -m or -M option, <oldbranch> will be renamed to <newbranch>. If <oldbranch> had a corresponding reflog, it is renamed to match <newbranch>, and a reflog entry is created to remember the branch renaming. If <newbranch> exists, -M must be used to force the rename to happen. The -c and -C options have the exact same semantics as -m and -M, except instead of the branch being renamed, it will be copied to a new name, along with its config and reflog. With a -d or -D option, <branchname> will be deleted. You may specify more than one branch for deletion. If the branch currently has a reflog then the reflog will also be deleted. Use -r together with -d to delete remote-tracking branches. Note, that it only makes sense to delete remote-tracking branches if they no longer exist in the remote repository or if git fetch was configured not to fetch them again. See also the prune subcommand of git-remote(1) for a way to clean up all obsolete remote-tracking branches. -d, --delete Delete a branch. The branch must be fully merged in its upstream branch, or in HEAD if no upstream was set with --track or --set-upstream-to. -D Shortcut for --delete --force. --create-reflog Create the branch’s reflog. This activates recording of all changes made to the branch ref, enabling use of date based sha1 expressions such as "<branchname>@{yesterday}". 
Note that in non-bare repositories, reflogs are usually enabled by default by the core.logAllRefUpdates config option. The negated form --no-create-reflog only overrides an earlier --create-reflog, but currently does not negate the setting of core.logAllRefUpdates. -f, --force Reset <branchname> to <start-point>, even if <branchname> exists already. Without -f, git branch refuses to change an existing branch. In combination with -d (or --delete), allow deleting the branch irrespective of its merged status, or whether it even points to a valid commit. In combination with -m (or --move), allow renaming the branch even if the new branch name already exists, the same applies for -c (or --copy). Note that git branch -f <branchname> [<start-point>], even with -f, refuses to change an existing branch <branchname> that is checked out in another worktree linked to the same repository. -m, --move Move/rename a branch, together with its config and reflog. -M Shortcut for --move --force. -c, --copy Copy a branch, together with its config and reflog. -C Shortcut for --copy --force. --color[=<when>] Color branches to highlight current, local, and remote-tracking branches. The value must be always (the default), never, or auto. --no-color Turn off branch colors, even when the configuration file gives the default to color output. Same as --color=never. -i, --ignore-case Sorting and filtering branches are case insensitive. --omit-empty Do not print a newline after formatted refs where the format expands to the empty string. --column[=<options>], --no-column Display branch listing in columns. See configuration variable column.branch for option syntax. --column and --no-column without options are equivalent to always and never respectively. This option is only applicable in non-verbose mode. -r, --remotes List or delete (if used with -d) the remote-tracking branches. Combine with --list to match the optional pattern(s). -a, --all List both remote-tracking branches and local branches. Combine with --list to match optional pattern(s). -l, --list List branches. With optional <pattern>..., e.g. git branch --list 'maint-*', list only the branches that match the pattern(s). --show-current Print the name of the current branch. In detached HEAD state, nothing is printed. -v, -vv, --verbose When in list mode, show sha1 and commit subject line for each head, along with relationship to upstream branch (if any). If given twice, print the path of the linked worktree (if any) and the name of the upstream branch, as well (see also git remote show <remote>). Note that the current worktree’s HEAD will not have its path printed (it will always be your current directory). -q, --quiet Be more quiet when creating or deleting a branch, suppressing non-error messages. --abbrev=<n> In the verbose listing that show the commit object name, show the shortest prefix that is at least <n> hexdigits long that uniquely refers the object. The default value is 7 and can be overridden by the core.abbrev config option. --no-abbrev Display the full sha1s in the output listing rather than abbreviating them. -t, --track[=(direct|inherit)] When creating a new branch, set up branch.<name>.remote and branch.<name>.merge configuration entries to set "upstream" tracking configuration for the new branch. This configuration will tell git to show the relationship between the two branches in git status and git branch -v. Furthermore, it directs git pull without arguments to pull from the upstream when the new branch is checked out. 
The exact upstream branch is chosen depending on the optional argument: -t, --track, or --track=direct means to use the start-point branch itself as the upstream; --track=inherit means to copy the upstream configuration of the start-point branch. The branch.autoSetupMerge configuration variable specifies how git switch, git checkout and git branch should behave when neither --track nor --no-track are specified: The default option, true, behaves as though --track=direct were given whenever the start-point is a remote-tracking branch. false behaves as if --no-track were given. always behaves as though --track=direct were given. inherit behaves as though --track=inherit were given. simple behaves as though --track=direct were given only when the start-point is a remote-tracking branch and the new branch has the same name as the remote branch. See git-pull(1) and git-config(1) for additional discussion on how the branch.<name>.remote and branch.<name>.merge options are used. --no-track Do not set up "upstream" configuration, even if the branch.autoSetupMerge configuration variable is set. --recurse-submodules THIS OPTION IS EXPERIMENTAL! Causes the current command to recurse into submodules if submodule.propagateBranches is enabled. See submodule.propagateBranches in git-config(1). Currently, only branch creation is supported. When used in branch creation, a new branch <branchname> will be created in the superproject and all of the submodules in the superproject’s <start-point>. In submodules, the branch will point to the submodule commit in the superproject’s <start-point> but the branch’s tracking information will be set up based on the submodule’s branches and remotes e.g. git branch --recurse-submodules topic origin/main will create the submodule branch "topic" that points to the submodule commit in the superproject’s "origin/main", but tracks the submodule’s "origin/main". --set-upstream As this option had confusing syntax, it is no longer supported. Please use --track or --set-upstream-to instead. -u <upstream>, --set-upstream-to=<upstream> Set up <branchname>'s tracking information so <upstream> is considered <branchname>'s upstream branch. If no <branchname> is specified, then it defaults to the current branch. --unset-upstream Remove the upstream information for <branchname>. If no branch is specified it defaults to the current branch. --edit-description Open an editor and edit the text to explain what the branch is for, to be used by various other commands (e.g. format-patch, request-pull, and merge (if enabled)). Multi-line explanations may be used. --contains [<commit>] Only list branches which contain the specified commit (HEAD if not specified). Implies --list. --no-contains [<commit>] Only list branches which don’t contain the specified commit (HEAD if not specified). Implies --list. --merged [<commit>] Only list branches whose tips are reachable from the specified commit (HEAD if not specified). Implies --list. --no-merged [<commit>] Only list branches whose tips are not reachable from the specified commit (HEAD if not specified). Implies --list. <branchname> The name of the branch to create or delete. The new branch name must pass all checks defined by git-check-ref-format(1). Some of these checks may restrict the characters allowed in a branch name. <start-point> The new branch head will point to this commit. It may be given as a branch name, a commit-id, or a tag. If this option is omitted, the current HEAD will be used instead. 
<oldbranch> The name of an existing branch to rename. <newbranch> The new name for an existing branch. The same restrictions as for <branchname> apply. --sort=<key> Sort based on the key given. Prefix - to sort in descending order of the value. You may use the --sort=<key> option multiple times, in which case the last key becomes the primary key. The keys supported are the same as those in git for-each-ref. Sort order defaults to the value configured for the branch.sort variable if it exists, or to sorting based on the full refname (including refs/... prefix). This lists detached HEAD (if present) first, then local branches and finally remote-tracking branches. See git-config(1). --points-at <object> Only list branches of the given object. --format <format> A string that interpolates %(fieldname) from a branch ref being shown and the object it points at. The format is the same as that of git-for-each-ref(1).
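A hedged sketch of combining the listing options above (branch names and patterns are illustrative):
$ git branch --sort=-committerdate --format='%(refname:short) %(committerdate:relative)'   # most recently committed branches first
$ git branch --list 'maint-*' --merged main                                                # maintenance branches already merged into main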
# git branch > Main Git command for working with branches. More information: https://git- > scm.com/docs/git-branch. * List all branches (local and remote; the current branch is highlighted by `*`): `git branch --all` * List which branches include a specific Git commit in their history: `git branch --all --contains {{commit_hash}}` * Show the name of the current branch: `git branch --show-current` * Create new branch based on the current commit: `git branch {{branch_name}}` * Create new branch based on a specific commit: `git branch {{branch_name}} {{commit_hash}}` * Rename a branch (must not have it checked out to do this): `git branch -m {{old_branch_name}} {{new_branch_name}}` * Delete a local branch (must not have it checked out to do this): `git branch -d {{branch_name}}` * Delete a remote branch: `git push {{remote_name}} --delete {{remote_branch_name}}`
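A short workflow sketch for the tracking behaviour described above; branch and remote names are illustrative:
$ git branch --track topic origin/main   # create "topic" with origin/main as its upstream
$ git branch -u origin/topic topic       # later, point the upstream at origin/topic instead
$ git branch -vv                         # verify upstream relationships in the verbose listing
$ git branch -d topic                    # delete the branch once it is fully merged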
base64
Base64 encode or decode FILE, or standard input, to standard output. With no FILE, or when FILE is -, read standard input. Mandatory arguments to long options are mandatory for short options too. -d, --decode decode data -i, --ignore-garbage when decoding, ignore non-alphabet characters -w, --wrap=COLS wrap encoded lines after COLS characters (default 76). Use 0 to disable line wrapping. --help display this help and exit --version output version information and exit The data are encoded as described for the base64 alphabet in RFC 4648. When decoding, the input may contain newlines in addition to the bytes of the formal base64 alphabet. Use --ignore-garbage to attempt to recover from any other non-alphabet bytes in the encoded stream.
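A few GNU coreutils-style invocations as a sketch (file names are illustrative; the macOS variant summarized below uses --input instead):
$ printf 'hello' | base64               # prints aGVsbG8=
$ base64 -w 0 image.png > image.b64     # encode without line wrapping
$ base64 --decode image.b64 > copy.png  # decode back to the original bytes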
# base64 > Encode and decode using Base64 representation. More information: > https://www.unix.com/man-page/osx/1/base64/. * Encode a file: `base64 --input={{plain_file}}` * Decode a file: `base64 --decode --input={{base64_file}}` * Encode from `stdin`: `echo -n "{{plain_text}}" | base64` * Decode from `stdin`: `echo -n {{base64_text}} | base64 --decode`
ipcs
ipcs shows information on System V inter-process communication facilities. By default it shows information about all three resources: shared memory segments, message queues, and semaphore arrays. -i, --id id Show full details on just the one resource element identified by id. This option needs to be combined with one of the three resource options: -m, -q or -s. -h, --help Display help text and exit. -V, --version Print version and exit. Resource options -m, --shmems Write information about active shared memory segments. -q, --queues Write information about active message queues. -s, --semaphores Write information about active semaphore sets. -a, --all Write information about all three resources (default). Output formats Of these options only one takes effect: the last one specified. -c, --creator Show creator and owner. -l, --limits Show resource limits. -p, --pid Show PIDs of creator and last operator. -t, --time Write time information: the time of the last control operation that changed the access permissions for all facilities, the time of the last msgsnd(2) and msgrcv(2) operations on message queues, the time of the last shmat(2) and shmdt(2) operations on shared memory, and the time of the last semop(2) operation on semaphores. -u, --summary Show status summary. Representation These affect only the -l (--limits) option. -b, --bytes Print the sizes in bytes rather than in a human-readable format. By default, sizes are expressed in bytes and unit prefixes are powers of 2^10 (1024). For readability, unit abbreviations are truncated to their first letter; for example, "1 KiB" and "1 MiB" are shown as "1 K" and "1 M", deliberately omitting the "iB" part of the abbreviation. --human Print sizes in human-readable format.
# ipcs > Display information about resources used in IPC (Inter-process > Communication). More information: https://manned.org/ipcs. * Specific information about the Message Queue which has the ID 32768: `ipcs -qi 32768` * General information about all the IPC: `ipcs -a`
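A few example invocations as a sketch of the options above:
$ ipcs -m -l --human   # shared-memory limits in human-readable units
$ ipcs -m -t           # last shmat(2)/shmdt(2) times for each shared memory segment
$ ipcs -u              # status summary for all three resource types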
type
The type utility shall indicate how each argument would be interpreted if used as a command name. The utility accepts no options.
# type > Display the type of command the shell will execute. More information: > https://manned.org/type. * Display the type of a command: `type {{command}}` * Display all locations containing the specified executable: `type -a {{command}}` * Display the name of the disk file that would be executed: `type -p {{command}}`
ul
ul reads the named files (or standard input if none are given) and translates occurrences of underscores to the sequence which indicates underlining for the terminal in use, as specified by the environment variable TERM. The terminfo database is read to determine the appropriate sequences for underlining. If the terminal is incapable of underlining but is capable of a standout mode, then that is used instead. If the terminal can overstrike, or handles underlining automatically, ul degenerates to cat(1). If the terminal cannot underline, underlining is ignored. -i, --indicated Underlining is indicated by a separate line containing appropriate dashes `-'; this is useful when you want to look at the underlining which is present in an nroff output stream on a crt-terminal. -t, -T, --terminal terminal Override the environment variable TERM with the specified terminal type. -h, --help Display help text and exit. -V, --version Print version and exit.
# ul > Performs the underlining of a text. Each character in a given string must be > underlined separately. More information: https://manned.org/ul. * Display the contents of the file with underlines where applicable: `ul {{file.txt}}` * Display the contents of the file with underlines made of dashes `-`: `ul -i {{file.txt}}`
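Two hedged examples of the options above (the file, man-page source and terminal names are illustrative):
$ ul -t xterm notes.txt    # translate underlining using xterm's sequences regardless of $TERM
$ nroff -man ./foo.1 | ul  # render nroff output and convert its underlining for the current terminal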
ldd
ldd prints the shared objects (shared libraries) required by each program or shared object specified on the command line. An example of its use and output is the following: $ ldd /bin/ls linux-vdso.so.1 (0x00007ffcc3563000) libselinux.so.1 => /lib64/libselinux.so.1 (0x00007f87e5459000) libcap.so.2 => /lib64/libcap.so.2 (0x00007f87e5254000) libc.so.6 => /lib64/libc.so.6 (0x00007f87e4e92000) libpcre.so.1 => /lib64/libpcre.so.1 (0x00007f87e4c22000) libdl.so.2 => /lib64/libdl.so.2 (0x00007f87e4a1e000) /lib64/ld-linux-x86-64.so.2 (0x00005574bf12e000) libattr.so.1 => /lib64/libattr.so.1 (0x00007f87e4817000) libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f87e45fa000) In the usual case, ldd invokes the standard dynamic linker (see ld.so(8)) with the LD_TRACE_LOADED_OBJECTS environment variable set to 1. This causes the dynamic linker to inspect the program's dynamic dependencies, and find (according to the rules described in ld.so(8)) and load the objects that satisfy those dependencies. For each dependency, ldd displays the location of the matching object and the (hexadecimal) address at which it is loaded. (The linux-vdso and ld-linux shared dependencies are special; see vdso(7) and ld.so(8).) Security Be aware that in some circumstances (e.g., where the program specifies an ELF interpreter other than ld-linux.so), some versions of ldd may attempt to obtain the dependency information by attempting to directly execute the program, which may lead to the execution of whatever code is defined in the program's ELF interpreter, and perhaps to execution of the program itself. (Before glibc 2.27, the upstream ldd implementation did this for example, although most distributions provided a modified version that did not.) Thus, you should never employ ldd on an untrusted executable, since this may result in the execution of arbitrary code. A safer alternative when dealing with untrusted executables is: $ objdump -p /path/to/program | grep NEEDED Note, however, that this alternative shows only the direct dependencies of the executable, while ldd shows the entire dependency tree of the executable. --version Print the version number of ldd. -v, --verbose Print all information, including, for example, symbol versioning information. -u, --unused Print unused direct dependencies. (Since glibc 2.3.4.) -d, --data-relocs Perform relocations and report any missing objects (ELF only). -r, --function-relocs Perform relocations for both data objects and functions, and report any missing objects or functions (ELF only). --help Usage information.
# ldd > Display shared library dependencies of a binary. Do not use on an untrusted > binary, use objdump for that instead. More information: > https://manned.org/ldd. * Display shared library dependencies of a binary: `ldd {{path/to/binary}}` * Display all information about dependencies: `ldd --verbose {{path/to/binary}}` * Display unused direct dependencies: `ldd --unused {{path/to/binary}}` * Report missing data objects and perform data relocations: `ldd --data-relocs {{path/to/binary}}` * Report missing data objects and functions, and perform relocations for both: `ldd --function-relocs {{path/to/binary}}`
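Two quick checks that follow from the description above (the binary path is illustrative):
$ ldd ./app | grep 'not found'     # list dependencies the dynamic linker cannot resolve
$ objdump -p ./app | grep NEEDED   # safer for untrusted binaries; shows only direct dependencies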
git-gc
Runs a number of housekeeping tasks within the current repository, such as compressing file revisions (to reduce disk space and increase performance), removing unreachable objects which may have been created from prior invocations of git add, packing refs, pruning reflog, rerere metadata or stale working trees. May also update ancillary indexes such as the commit-graph. When common porcelain operations that create objects are run, they will check whether the repository has grown substantially since the last maintenance, and if so run git gc automatically. See gc.auto below for how to disable this behavior. Running git gc manually should only be needed when adding objects to a repository without regularly running such porcelain commands, to do a one-off repository optimization, or e.g. to clean up a suboptimal mass-import. See the "PACKFILE OPTIMIZATION" section in git-fast-import(1) for more details on the import case. --aggressive Usually git gc runs very quickly while providing good disk space utilization and performance. This option will cause git gc to more aggressively optimize the repository at the expense of taking much more time. The effects of this optimization are mostly persistent. See the "AGGRESSIVE" section below for details. --auto With this option, git gc checks whether any housekeeping is required; if not, it exits without performing any work. See the gc.auto option in the "CONFIGURATION" section below for how this heuristic works. Once housekeeping is triggered by exceeding the limits of configuration options such as gc.auto and gc.autoPackLimit, all other housekeeping tasks (e.g. rerere, working trees, reflog...) will be performed as well. --[no-]cruft When expiring unreachable objects, pack them separately into a cruft pack instead of storing them as loose objects. --cruft is on by default. --prune=<date> Prune loose objects older than date (default is 2 weeks ago, overridable by the config variable gc.pruneExpire). --prune=now prunes loose objects regardless of their age and increases the risk of corruption if another process is writing to the repository concurrently; see "NOTES" below. --prune is on by default. --no-prune Do not prune any loose objects. --quiet Suppress all progress reports. --force Force git gc to run even if there may be another git gc instance running on this repository. --keep-largest-pack All packs except the largest non-cruft pack, any packs marked with a .keep file, and any cruft pack(s) are consolidated into a single pack. When this option is used, gc.bigPackThreshold is ignored.
# git gc > Optimise the local repository by cleaning unnecessary files. More > information: https://git-scm.com/docs/git-gc. * Optimise the repository: `git gc` * Aggressively optimise, takes more time: `git gc --aggressive` * Do not prune loose objects (prunes by default): `git gc --no-prune` * Suppress all output: `git gc --quiet` * View full usage: `git gc --help`
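A sketch of checking the effect of a manual run; git count-objects reports loose-object and pack statistics before and after:
$ git count-objects -vH   # object counts and sizes before collection
$ git gc --prune=now      # repack and prune loose unreachable objects immediately (note the concurrency caveat above)
$ git count-objects -vH   # compare the numbers afterwards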
git-diff
Show changes between the working tree and the index or a tree, changes between the index and a tree, changes between two trees, changes resulting from a merge, changes between two blob objects, or changes between two files on disk. git diff [<options>] [--] [<path>...] This form is to view the changes you made relative to the index (staging area for the next commit). In other words, the differences are what you could tell Git to further add to the index but you still haven’t. You can stage these changes by using git-add(1). git diff [<options>] --no-index [--] <path> <path> This form is to compare the given two paths on the filesystem. You can omit the --no-index option when running the command in a working tree controlled by Git and at least one of the paths points outside the working tree, or when running the command outside a working tree controlled by Git. This form implies --exit-code. git diff [<options>] --cached [--merge-base] [<commit>] [--] [<path>...] This form is to view the changes you staged for the next commit relative to the named <commit>. Typically you would want comparison with the latest commit, so if you do not give <commit>, it defaults to HEAD. If HEAD does not exist (e.g. unborn branches) and <commit> is not given, it shows all staged changes. --staged is a synonym of --cached. If --merge-base is given, instead of using <commit>, use the merge base of <commit> and HEAD. git diff --cached --merge-base A is equivalent to git diff --cached $(git merge-base A HEAD). git diff [<options>] [--merge-base] <commit> [--] [<path>...] This form is to view the changes you have in your working tree relative to the named <commit>. You can use HEAD to compare it with the latest commit, or a branch name to compare with the tip of a different branch. If --merge-base is given, instead of using <commit>, use the merge base of <commit> and HEAD. git diff --merge-base A is equivalent to git diff $(git merge-base A HEAD). git diff [<options>] [--merge-base] <commit> <commit> [--] [<path>...] This is to view the changes between two arbitrary <commit>. If --merge-base is given, use the merge base of the two commits for the "before" side. git diff --merge-base A B is equivalent to git diff $(git merge-base A B) B. git diff [<options>] <commit> <commit>... <commit> [--] [<path>...] This form is to view the results of a merge commit. The first listed <commit> must be the merge itself; the remaining two or more commits should be its parents. Convenient ways to produce the desired set of revisions are to use the suffixes ^@ and ^!. If A is a merge commit, then git diff A A^@, git diff A^! and git show A all give the same combined diff. git diff [<options>] <commit>..<commit> [--] [<path>...] This is synonymous to the earlier form (without the ..) for viewing the changes between two arbitrary <commit>. If <commit> on one side is omitted, it will have the same effect as using HEAD instead. git diff [<options>] <commit>...<commit> [--] [<path>...] This form is to view the changes on the branch containing and up to the second <commit>, starting at a common ancestor of both <commit>. git diff A...B is equivalent to git diff $(git merge-base A B) B. You can omit any one of <commit>, which has the same effect as using HEAD instead. Just in case you are doing something exotic, it should be noted that all of the <commit> in the above description, except in the --merge-base case and in the last two forms that use .. notations, can be any <tree>. 
A tree of interest is the one pointed to by the special ref AUTO_MERGE, which is written by the ort merge strategy upon hitting merge conflicts (see git-merge(1)). Comparing the working tree with AUTO_MERGE shows changes you’ve made so far to resolve textual conflicts (see the examples below). For a more complete list of ways to spell <commit>, see "SPECIFYING REVISIONS" section in gitrevisions(7). However, "diff" is about comparing two endpoints, not ranges, and the range notations (<commit>..<commit> and <commit>...<commit>) do not mean a range as defined in the "SPECIFYING RANGES" section in gitrevisions(7). git diff [<options>] <blob> <blob> This form is to view the differences between the raw contents of two blob objects. -p, -u, --patch Generate patch (see section titled "Generating patch text with -p"). This is the default. -s, --no-patch Suppress all output from the diff machinery. Useful for commands like git show that show the patch by default to squelch their output, or to cancel the effect of options like --patch, --stat earlier on the command line in an alias. -U<n>, --unified=<n> Generate diffs with <n> lines of context instead of the usual three. Implies --patch. --output=<file> Output to a specific file instead of stdout. --output-indicator-new=<char>, --output-indicator-old=<char>, --output-indicator-context=<char> Specify the character used to indicate new, old or context lines in the generated patch. Normally they are +, - and ' ' respectively. --raw Generate the diff in raw format. --patch-with-raw Synonym for -p --raw. --indent-heuristic Enable the heuristic that shifts diff hunk boundaries to make patches easier to read. This is the default. --no-indent-heuristic Disable the indent heuristic. --minimal Spend extra time to make sure the smallest possible diff is produced. --patience Generate a diff using the "patience diff" algorithm. --histogram Generate a diff using the "histogram diff" algorithm. --anchored=<text> Generate a diff using the "anchored diff" algorithm. This option may be specified more than once. If a line exists in both the source and destination, exists only once, and starts with this text, this algorithm attempts to prevent it from appearing as a deletion or addition in the output. It uses the "patience diff" algorithm internally. --diff-algorithm={patience|minimal|histogram|myers} Choose a diff algorithm. The variants are as follows: default, myers The basic greedy diff algorithm. Currently, this is the default. minimal Spend extra time to make sure the smallest possible diff is produced. patience Use "patience diff" algorithm when generating patches. histogram This algorithm extends the patience algorithm to "support low-occurrence common elements". For instance, if you configured the diff.algorithm variable to a non-default value and want to use the default one, then you have to use --diff-algorithm=default option. --stat[=<width>[,<name-width>[,<count>]]] Generate a diffstat. By default, as much space as necessary will be used for the filename part, and the rest for the graph part. Maximum width defaults to terminal width, or 80 columns if not connected to a terminal, and can be overridden by <width>. The width of the filename part can be limited by giving another width <name-width> after a comma. The width of the graph part can be limited by using --stat-graph-width=<width> (affects all commands generating a stat graph) or by setting diff.statGraphWidth=<width> (does not affect git format-patch). 
By giving a third parameter <count>, you can limit the output to the first <count> lines, followed by ... if there are more. These parameters can also be set individually with --stat-width=<width>, --stat-name-width=<name-width> and --stat-count=<count>. --compact-summary Output a condensed summary of extended header information such as file creations or deletions ("new" or "gone", optionally "+l" if it’s a symlink) and mode changes ("+x" or "-x" for adding or removing executable bit respectively) in diffstat. The information is put between the filename part and the graph part. Implies --stat. --numstat Similar to --stat, but shows number of added and deleted lines in decimal notation and pathname without abbreviation, to make it more machine friendly. For binary files, outputs two - instead of saying 0 0. --shortstat Output only the last line of the --stat format containing total number of modified files, as well as number of added and deleted lines. -X[<param1,param2,...>], --dirstat[=<param1,param2,...>] Output the distribution of relative amount of changes for each sub-directory. The behavior of --dirstat can be customized by passing it a comma separated list of parameters. The defaults are controlled by the diff.dirstat configuration variable (see git-config(1)). The following parameters are available: changes Compute the dirstat numbers by counting the lines that have been removed from the source, or added to the destination. This ignores the amount of pure code movements within a file. In other words, rearranging lines in a file is not counted as much as other changes. This is the default behavior when no parameter is given. lines Compute the dirstat numbers by doing the regular line-based diff analysis, and summing the removed/added line counts. (For binary files, count 64-byte chunks instead, since binary files have no natural concept of lines). This is a more expensive --dirstat behavior than the changes behavior, but it does count rearranged lines within a file as much as other changes. The resulting output is consistent with what you get from the other --*stat options. files Compute the dirstat numbers by counting the number of files changed. Each changed file counts equally in the dirstat analysis. This is the computationally cheapest --dirstat behavior, since it does not have to look at the file contents at all. cumulative Count changes in a child directory for the parent directory as well. Note that when using cumulative, the sum of the percentages reported may exceed 100%. The default (non-cumulative) behavior can be specified with the noncumulative parameter. <limit> An integer parameter specifies a cut-off percent (3% by default). Directories contributing less than this percentage of the changes are not shown in the output. Example: The following will count changed files, while ignoring directories with less than 10% of the total amount of changed files, and accumulating child directory counts in the parent directories: --dirstat=files,10,cumulative. --cumulative Synonym for --dirstat=cumulative --dirstat-by-file[=<param1,param2>...] Synonym for --dirstat=files,param1,param2... --summary Output a condensed summary of extended header information such as creations, renames and mode changes. --patch-with-stat Synonym for -p --stat. -z When --raw, --numstat, --name-only or --name-status has been given, do not munge pathnames and use NULs as output field terminators. 
Without this option, pathnames with "unusual" characters are quoted as explained for the configuration variable core.quotePath (see git-config(1)). --name-only Show only names of changed files. The file names are often encoded in UTF-8. For more information see the discussion about encoding in the git-log(1) manual page. --name-status Show only names and status of changed files. See the description of the --diff-filter option on what the status letters mean. Just like --name-only the file names are often encoded in UTF-8. --submodule[=<format>] Specify how differences in submodules are shown. When specifying --submodule=short the short format is used. This format just shows the names of the commits at the beginning and end of the range. When --submodule or --submodule=log is specified, the log format is used. This format lists the commits in the range like git-submodule(1) summary does. When --submodule=diff is specified, the diff format is used. This format shows an inline diff of the changes in the submodule contents between the commit range. Defaults to diff.submodule or the short format if the config option is unset. --color[=<when>] Show colored diff. --color (i.e. without =<when>) is the same as --color=always. <when> can be one of always, never, or auto. It can be changed by the color.ui and color.diff configuration settings. --no-color Turn off colored diff. This can be used to override configuration settings. It is the same as --color=never. --color-moved[=<mode>] Moved lines of code are colored differently. It can be changed by the diff.colorMoved configuration setting. The <mode> defaults to no if the option is not given and to zebra if the option with no mode is given. The mode must be one of: no Moved lines are not highlighted. default Is a synonym for zebra. This may change to a more sensible mode in the future. plain Any line that is added in one location and was removed in another location will be colored with color.diff.newMoved. Similarly color.diff.oldMoved will be used for removed lines that are added somewhere else in the diff. This mode picks up any moved line, but it is not very useful in a review to determine if a block of code was moved without permutation. blocks Blocks of moved text of at least 20 alphanumeric characters are detected greedily. The detected blocks are painted using either the color.diff.{old,new}Moved color. Adjacent blocks cannot be told apart. zebra Blocks of moved text are detected as in blocks mode. The blocks are painted using either the color.diff.{old,new}Moved color or color.diff.{old,new}MovedAlternative. The change between the two colors indicates that a new block was detected. dimmed-zebra Similar to zebra, but additional dimming of uninteresting parts of moved code is performed. The bordering lines of two adjacent blocks are considered interesting, the rest is uninteresting. dimmed_zebra is a deprecated synonym. --no-color-moved Turn off move detection. This can be used to override configuration settings. It is the same as --color-moved=no. --color-moved-ws=<modes> This configures how whitespace is ignored when performing the move detection for --color-moved. It can be set by the diff.colorMovedWS configuration setting. These modes can be given as a comma separated list: no Do not ignore whitespace when performing move detection. ignore-space-at-eol Ignore changes in whitespace at EOL. ignore-space-change Ignore changes in amount of whitespace. 
This ignores whitespace at line end, and considers all other sequences of one or more whitespace characters to be equivalent. ignore-all-space Ignore whitespace when comparing lines. This ignores differences even if one line has whitespace where the other line has none. allow-indentation-change Initially ignore any whitespace in the move detection, then group the moved code blocks only into a block if the change in whitespace is the same per line. This is incompatible with the other modes. --no-color-moved-ws Do not ignore whitespace when performing move detection. This can be used to override configuration settings. It is the same as --color-moved-ws=no. --word-diff[=<mode>] Show a word diff, using the <mode> to delimit changed words. By default, words are delimited by whitespace; see --word-diff-regex below. The <mode> defaults to plain, and must be one of: color Highlight changed words using only colors. Implies --color. plain Show words as [-removed-] and {+added+}. Makes no attempts to escape the delimiters if they appear in the input, so the output may be ambiguous. porcelain Use a special line-based format intended for script consumption. Added/removed/unchanged runs are printed in the usual unified diff format, starting with a +/-/` ` character at the beginning of the line and extending to the end of the line. Newlines in the input are represented by a tilde ~ on a line of its own. none Disable word diff again. Note that despite the name of the first mode, color is used to highlight the changed parts in all modes if enabled. --word-diff-regex=<regex> Use <regex> to decide what a word is, instead of considering runs of non-whitespace to be a word. Also implies --word-diff unless it was already enabled. Every non-overlapping match of the <regex> is considered a word. Anything between these matches is considered whitespace and ignored(!) for the purposes of finding differences. You may want to append |[^[:space:]] to your regular expression to make sure that it matches all non-whitespace characters. A match that contains a newline is silently truncated(!) at the newline. For example, --word-diff-regex=. will treat each character as a word and, correspondingly, show differences character by character. The regex can also be set via a diff driver or configuration option, see gitattributes(5) or git-config(1). Giving it explicitly overrides any diff driver or configuration setting. Diff drivers override configuration settings. --color-words[=<regex>] Equivalent to --word-diff=color plus (if a regex was specified) --word-diff-regex=<regex>. --no-renames Turn off rename detection, even when the configuration file gives the default to do so. --[no-]rename-empty Whether to use empty blobs as rename source. --check Warn if changes introduce conflict markers or whitespace errors. What are considered whitespace errors is controlled by core.whitespace configuration. By default, trailing whitespaces (including lines that consist solely of whitespaces) and a space character that is immediately followed by a tab character inside the initial indent of the line are considered whitespace errors. Exits with non-zero status if problems are found. Not compatible with --exit-code. --ws-error-highlight=<kind> Highlight whitespace errors in the context, old or new lines of the diff. Multiple values are separated by comma, none resets previous values, default reset the list to new and all is a shorthand for old,new,context. 
When this option is not given, and the configuration variable diff.wsErrorHighlight is not set, only whitespace errors in new lines are highlighted. The whitespace errors are colored with color.diff.whitespace. --full-index Instead of the first handful of characters, show the full pre- and post-image blob object names on the "index" line when generating patch format output. --binary In addition to --full-index, output a binary diff that can be applied with git-apply. Implies --patch. --abbrev[=<n>] Instead of showing the full 40-byte hexadecimal object name in diff-raw format output and diff-tree header lines, show the shortest prefix that is at least <n> hexdigits long that uniquely refers the object. In diff-patch output format, --full-index takes higher precedence, i.e. if --full-index is specified, full blob names will be shown regardless of --abbrev. A non-default number of digits can be specified with --abbrev=<n>. -B[<n>][/<m>], --break-rewrites[=[<n>][/<m>]] Break complete rewrite changes into pairs of delete and create. This serves two purposes: It affects the way a change that amounts to a total rewrite of a file is shown: not as a series of deletions and insertions mixed together with a very few lines that happen to match textually as the context, but as a single deletion of everything old followed by a single insertion of everything new; the number m controls this aspect of the -B option (defaults to 60%). -B/70% specifies that less than 30% of the original should remain in the result for Git to consider it a total rewrite (i.e. otherwise the resulting patch will be a series of deletion and insertion mixed together with context lines). When used with -M, a totally-rewritten file is also considered as the source of a rename (usually -M only considers a file that disappeared as the source of a rename), and the number n controls this aspect of the -B option (defaults to 50%). -B20% specifies that a change with addition and deletion compared to 20% or more of the file’s size is eligible for being picked up as a possible source of a rename to another file. -M[<n>], --find-renames[=<n>] Detect renames. If n is specified, it is a threshold on the similarity index (i.e. amount of addition/deletions compared to the file’s size). For example, -M90% means Git should consider a delete/add pair to be a rename if more than 90% of the file hasn’t changed. Without a % sign, the number is to be read as a fraction, with a decimal point before it. I.e., -M5 becomes 0.5, and is thus the same as -M50%. Similarly, -M05 is the same as -M5%. To limit detection to exact renames, use -M100%. The default similarity index is 50%. -C[<n>], --find-copies[=<n>] Detect copies as well as renames. See also --find-copies-harder. If n is specified, it has the same meaning as for -M<n>. --find-copies-harder For performance reasons, by default, -C option finds copies only if the original file of the copy was modified in the same changeset. This flag makes the command inspect unmodified files as candidates for the source of copy. This is a very expensive operation for large projects, so use it with caution. Giving more than one -C option has the same effect. -D, --irreversible-delete Omit the preimage for deletes, i.e. print only the header but not the diff between the preimage and /dev/null. The resulting patch is not meant to be applied with patch or git apply; this is solely for people who want to just concentrate on reviewing the text after the change. 
In addition, the output obviously lacks enough information to apply such a patch in reverse, even manually, hence the name of the option. When used together with -B, omit also the preimage in the deletion part of a delete/create pair. -l<num> The -M and -C options involve some preliminary steps that can detect subsets of renames/copies cheaply, followed by an exhaustive fallback portion that compares all remaining unpaired destinations to all relevant sources. (For renames, only remaining unpaired sources are relevant; for copies, all original sources are relevant.) For N sources and destinations, this exhaustive check is O(N^2). This option prevents the exhaustive portion of rename/copy detection from running if the number of source/destination files involved exceeds the specified number. Defaults to diff.renameLimit. Note that a value of 0 is treated as unlimited. --diff-filter=[(A|C|D|M|R|T|U|X|B)...[*]] Select only files that are Added (A), Copied (C), Deleted (D), Modified (M), Renamed (R), have their type (i.e. regular file, symlink, submodule, ...) changed (T), are Unmerged (U), are Unknown (X), or have had their pairing Broken (B). Any combination of the filter characters (including none) can be used. When * (All-or-none) is added to the combination, all paths are selected if there is any file that matches other criteria in the comparison; if there is no file that matches other criteria, nothing is selected. Also, these upper-case letters can be downcased to exclude. E.g. --diff-filter=ad excludes added and deleted paths. Note that not all diffs can feature all types. For instance, copied and renamed entries cannot appear if detection for those types is disabled. -S<string> Look for differences that change the number of occurrences of the specified string (i.e. addition/deletion) in a file. Intended for the scripter’s use. It is useful when you’re looking for an exact block of code (like a struct), and want to know the history of that block since it first came into being: use the feature iteratively to feed the interesting block in the preimage back into -S, and keep going until you get the very first version of the block. Binary files are searched as well. -G<regex> Look for differences whose patch text contains added/removed lines that match <regex>. To illustrate the difference between -S<regex> --pickaxe-regex and -G<regex>, consider a commit with the following diff in the same file: + return frotz(nitfol, two->ptr, 1, 0); ... - hit = frotz(nitfol, mf2.ptr, 1, 0); While git log -G"frotz\(nitfol" will show this commit, git log -S"frotz\(nitfol" --pickaxe-regex will not (because the number of occurrences of that string did not change). Unless --text is supplied patches of binary files without a textconv filter will be ignored. See the pickaxe entry in gitdiffcore(7) for more information. --find-object=<object-id> Look for differences that change the number of occurrences of the specified object. Similar to -S, just the argument is different in that it doesn’t search for a specific string but for a specific object id. The object can be a blob or a submodule commit. It implies the -t option in git-log to also find trees. --pickaxe-all When -S or -G finds a change, show all the changes in that changeset, not just the files that contain the change in <string>. --pickaxe-regex Treat the <string> given to -S as an extended POSIX regular expression to match. -O<orderfile> Control the order in which files appear in the output. 
This overrides the diff.orderFile configuration variable (see git-config(1)). To cancel diff.orderFile, use -O/dev/null. The output order is determined by the order of glob patterns in <orderfile>. All files with pathnames that match the first pattern are output first, all files with pathnames that match the second pattern (but not the first) are output next, and so on. All files with pathnames that do not match any pattern are output last, as if there was an implicit match-all pattern at the end of the file. If multiple pathnames have the same rank (they match the same pattern but no earlier patterns), their output order relative to each other is the normal order. <orderfile> is parsed as follows: • Blank lines are ignored, so they can be used as separators for readability. • Lines starting with a hash ("#") are ignored, so they can be used for comments. Add a backslash ("\") to the beginning of the pattern if it starts with a hash. • Each other line contains a single pattern. Patterns have the same syntax and semantics as patterns used for fnmatch(3) without the FNM_PATHNAME flag, except a pathname also matches a pattern if removing any number of the final pathname components matches the pattern. For example, the pattern "foo*bar" matches "fooasdfbar" and "foo/bar/baz/asdf" but not "foobarx". --skip-to=<file>, --rotate-to=<file> Discard the files before the named <file> from the output (i.e. skip to), or move them to the end of the output (i.e. rotate to). These were invented primarily for use of the git difftool command, and may not be very useful otherwise. -R Swap two inputs; that is, show differences from index or on-disk file to tree contents. --relative[=<path>], --no-relative When run from a subdirectory of the project, it can be told to exclude changes outside the directory and show pathnames relative to it with this option. When you are not in a subdirectory (e.g. in a bare repository), you can name which subdirectory to make the output relative to by giving a <path> as an argument. --no-relative can be used to countermand both diff.relative config option and previous --relative. -a, --text Treat all files as text. --ignore-cr-at-eol Ignore carriage-return at the end of line when doing a comparison. --ignore-space-at-eol Ignore changes in whitespace at EOL. -b, --ignore-space-change Ignore changes in amount of whitespace. This ignores whitespace at line end, and considers all other sequences of one or more whitespace characters to be equivalent. -w, --ignore-all-space Ignore whitespace when comparing lines. This ignores differences even if one line has whitespace where the other line has none. --ignore-blank-lines Ignore changes whose lines are all blank. -I<regex>, --ignore-matching-lines=<regex> Ignore changes whose all lines match <regex>. This option may be specified more than once. --inter-hunk-context=<lines> Show the context between diff hunks, up to the specified number of lines, thereby fusing hunks that are close to each other. Defaults to diff.interHunkContext or 0 if the config option is unset. -W, --function-context Show whole function as context lines for each change. The function names are determined in the same way as git diff works out patch hunk headers (see Defining a custom hunk-header in gitattributes(5)). --exit-code Make the program exit with codes similar to diff(1). That is, it exits with 1 if there were differences and 0 means no differences. --quiet Disable all output of the program. Implies --exit-code. 
--ext-diff Allow an external diff helper to be executed. If you set an external diff driver with gitattributes(5), you need to use this option with git-log(1) and friends. --no-ext-diff Disallow external diff drivers. --textconv, --no-textconv Allow (or disallow) external text conversion filters to be run when comparing binary files. See gitattributes(5) for details. Because textconv filters are typically a one-way conversion, the resulting diff is suitable for human consumption, but cannot be applied. For this reason, textconv filters are enabled by default only for git-diff(1) and git-log(1), but not for git-format-patch(1) or diff plumbing commands. --ignore-submodules[=<when>] Ignore changes to submodules in the diff generation. <when> can be either "none", "untracked", "dirty" or "all", which is the default. Using "none" will consider the submodule modified when it either contains untracked or modified files or its HEAD differs from the commit recorded in the superproject and can be used to override any settings of the ignore option in git-config(1) or gitmodules(5). When "untracked" is used submodules are not considered dirty when they only contain untracked content (but they are still scanned for modified content). Using "dirty" ignores all changes to the work tree of submodules, only changes to the commits stored in the superproject are shown (this was the behavior until 1.7.0). Using "all" hides all changes to submodules. --src-prefix=<prefix> Show the given source prefix instead of "a/". --dst-prefix=<prefix> Show the given destination prefix instead of "b/". --no-prefix Do not show any source or destination prefix. --default-prefix Use the default source and destination prefixes ("a/" and "b/"). This is usually the default already, but may be used to override config such as diff.noprefix. --line-prefix=<prefix> Prepend an additional prefix to every line of output. --ita-invisible-in-index By default entries added by "git add -N" appear as an existing empty file in "git diff" and a new file in "git diff --cached". This option makes the entry appear as a new file in "git diff" and non-existent in "git diff --cached". This option could be reverted with --ita-visible-in-index. Both options are experimental and could be removed in future. For more detailed explanation on these common options, see also gitdiffcore(7). -1 --base, -2 --ours, -3 --theirs Compare the working tree with the "base" version (stage #1), "our branch" (stage #2) or "their branch" (stage #3). The index contains these stages only for unmerged entries i.e. while resolving conflicts. See git-read-tree(1) section "3-Way Merge" for detailed information. -0 Omit diff output for unmerged entries and just show "Unmerged". Can be used only when comparing the working tree with the index. <path>... The <paths> parameters, when given, are used to limit the diff to the named paths (you can give directory names and get diff for all files under them).
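As an illustrative combination of the options above (the revision range and path are placeholders, not taken from this page): `git diff -M90% --name-status v2.0..HEAD -- src/` asks for rename detection at a 90% similarity threshold and prints only the status letter and name of each changed file under src/, while `git diff --stat --dirstat=files,10 v2.0..HEAD` summarizes the same range as a diffstat plus a per-directory distribution that omits directories contributing less than 10% of the changed files.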
# git diff > Show changes to tracked files. More information: https://git-scm.com/docs/git-diff. * Show unstaged, uncommitted changes: `git diff` * Show all uncommitted changes (including staged ones): `git diff HEAD` * Show only staged (added, but not yet committed) changes: `git diff --staged` * Show changes from all commits since a given date/time (a date expression, e.g. "1 week 2 days" or an ISO date): `git diff 'HEAD@{3 months|weeks|days|hours|seconds ago}'` * Show only names of changed files since a given commit: `git diff --name-only {{commit}}` * Output a summary of file creations, renames and mode changes since a given commit: `git diff --summary {{commit}}` * Compare a single file between two branches or commits: `git diff {{branch_1}}..{{branch_2}} [--] {{path/to/file}}` * Compare different files from the current branch to another branch: `git diff {{branch}}:{{path/to/file2}} {{path/to/file}}`
unexpand
Convert blanks in each FILE to tabs, writing to standard output. With no FILE, or when FILE is -, read standard input. Mandatory arguments to long options are mandatory for short options too. -a, --all convert all blanks, instead of just initial blanks --first-only convert only leading sequences of blanks (overrides -a) -t, --tabs=N have tabs N characters apart instead of 8 (enables -a) -t, --tabs=LIST use comma separated list of tab positions. The last specified position can be prefixed with '/' to specify a tab size to use after the last explicitly specified tab stop. Also a prefix of '+' can be used to align remaining tab stops relative to the last specified tab stop instead of the first column --help display this help and exit --version output version information and exit
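For example (file names are placeholders), `unexpand -a -t 4 indented.txt > tabbed.txt` converts every run of blanks that spans a 4-column tab stop into tabs, while plain `unexpand indented.txt` only touches leading blanks.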
# unexpand > Convert spaces to tabs. More information: > https://www.gnu.org/software/coreutils/unexpand. * Convert blanks in each file to tabs, writing to `stdout`: `unexpand {{path/to/file}}` * Convert blanks to tabs, reading from `stdin`: `unexpand` * Convert all blanks, instead of just initial blanks: `unexpand -a {{path/to/file}}` * Convert only leading sequences of blanks (overrides -a): `unexpand --first-only {{path/to/file}}` * Have tabs a certain number of characters apart, not 8 (enables -a): `unexpand -t {{number}} {{path/to/file}}`
unlink
The unlink utility shall perform the function call: unlink(file); A user may need appropriate privileges to invoke the unlink utility. No options are supported.
# unlink > Remove a link to a file from the filesystem. The file contents are lost if > the link is the last one to the file. More information: > https://www.gnu.org/software/coreutils/unlink. * Remove the specified file if it is the last link: `unlink {{path/to/file}}`
ls
For each operand that names a file of a type other than directory or symbolic link to a directory, ls shall write the name of the file as well as any requested, associated information. For each operand that names a file of type directory, ls shall write the names of files contained within the directory as well as any requested, associated information. Filenames beginning with a <period> ('.') and any associated information shall not be written out unless explicitly referenced, the -A or -a option is supplied, or an implementation-defined condition causes them to be written. If one or more of the -d, -F, or -l options are specified, and neither the -H nor the -L option is specified, for each operand that names a file of type symbolic link to a directory, ls shall write the name of the file as well as any requested, associated information. If none of the -d, -F, or -l options are specified, or the -H or -L options are specified, for each operand that names a file of type symbolic link to a directory, ls shall write the names of files contained within the directory as well as any requested, associated information. In each case where the names of files contained within a directory are written, if the directory contains any symbolic links then ls shall evaluate the file information and file type to be those of the symbolic link itself, unless the -L option is specified. If no operands are specified, ls shall behave as if a single operand of dot ('.') had been specified. If more than one operand is specified, ls shall write non-directory operands first; it shall sort directory and non-directory operands separately according to the collating sequence in the current locale. Whenever ls sorts filenames or pathnames according to the collating sequence in the current locale, if this collating sequence does not have a total ordering of all characters (see the Base Definitions volume of POSIX.1‐2017, Section 7.3.2, LC_COLLATE), then any filenames or pathnames that collate equally should be further compared byte-by-byte using the collating sequence for the POSIX locale. The ls utility shall detect infinite loops; that is, entering a previously visited directory that is an ancestor of the last file encountered. When it detects an infinite loop, ls shall write a diagnostic message to standard error and shall either recover its position in the hierarchy or terminate. The ls utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines. The following options shall be supported: -A Write out all directory entries, including those whose names begin with a <period> ('.') but excluding the entries dot and dot-dot (if they exist). -C Write multi-text-column output with entries sorted down the columns, according to the collating sequence. The number of text columns and the column separator characters are unspecified, but should be adapted to the nature of the output device. This option disables long format output. -F Do not follow symbolic links named as operands unless the -H or -L options are specified. Write a <slash> ('/') immediately after each pathname that is a directory, an <asterisk> ('*') after each that is executable, a <vertical-line> ('|') after each that is a FIFO, and an at-sign ('@') after each that is a symbolic link. For other file types, other symbols may be written. 
-H Evaluate the file information and file type for symbolic links specified on the command line to be those of the file referenced by the link, and not the link itself; however, ls shall write the name of the link itself and not the file referenced by the link. -L Evaluate the file information and file type for all symbolic links (whether named on the command line or encountered in a file hierarchy) to be those of the file referenced by the link, and not the link itself; however, ls shall write the name of the link itself and not the file referenced by the link. When -L is used with -l, write the contents of symbolic links in the long format (see the STDOUT section). -R Recursively list subdirectories encountered. When a symbolic link to a directory is encountered, the directory shall not be recursively listed unless the -L option is specified. The use of -R with -d or -f produces unspecified results. -S Sort with the primary key being file size (in decreasing order) and the secondary key being filename in the collating sequence (in increasing order). -a Write out all directory entries, including those whose names begin with a <period> ('.'). -c Use time of last modification of the file status information (see the Base Definitions volume of POSIX.1‐2017, sys_stat.h(0p)) instead of last modification of the file itself for sorting (-t) or writing (-l). -d Do not follow symbolic links named as operands unless the -H or -L options are specified. Do not treat directories differently than other types of files. The use of -d with -R or -f produces unspecified results. -f List the entries in directory operands in the order they appear in the directory. The behavior for non- directory operands is unspecified. This option shall turn on -a. When -f is specified, any occurrences of the -r, -S, and -t options shall be ignored and any occurrences of the -A, -g, -l, -n, -o, and -s options may be ignored. The use of -f with -R or -d produces unspecified results. -g Turn on the -l (ell) option, but disable writing the file's owner name or number. Disable the -C, -m, and -x options. -i For each file, write the file's file serial number (see stat() in the System Interfaces volume of POSIX.1‐2017). -k Set the block size for the -s option and the per- directory block count written for the -l, -n, -s, -g, and -o options (see the STDOUT section) to 1024 bytes. -l (The letter ell.) Do not follow symbolic links named as operands unless the -H or -L options are specified. Write out in long format (see the STDOUT section). Disable the -C, -m, and -x options. -m Stream output format; list pathnames across the page, separated by a <comma> character followed by a <space> character. Use a <newline> character as the list terminator and after the separator sequence when there is not room on a line for the next list entry. This option disables long format output. -n Turn on the -l (ell) option, but when writing the file's owner or group, write the file's numeric UID or GID rather than the user or group name, respectively. Disable the -C, -m, and -x options. -o Turn on the -l (ell) option, but disable writing the file's group name or number. Disable the -C, -m, and -x options. -p Write a <slash> ('/') after each filename if that file is a directory. -q Force each instance of non-printable filename characters and <tab> characters to be written as the <question-mark> ('?') character. Implementations may provide this option by default if the output is to a terminal device. 
-r Reverse the order of the sort to get reverse collating sequence, oldest first, or smallest file size first, depending on the other options given. -s Indicate the total number of file system blocks consumed by each file displayed. If the -k option is also specified, the block size shall be 1024 bytes; otherwise, the block size is implementation-defined. -t Sort with the primary key being time modified (most recently modified first) and the secondary key being filename in the collating sequence. For a symbolic link, the time used as the sort key is that of the symbolic link itself, unless ls is evaluating its file information to be that of the file referenced by the link (see the -H and -L options). -u Use time of last access (see the Base Definitions volume of POSIX.1‐2017, sys_stat.h(0p)) instead of last modification of the file for sorting (-t) or writing (-l). -x The same as -C, except that the multi-text-column output is produced with entries sorted across, rather than down, the columns. This option disables long format output. -1 (The numeric digit one.) Force output to be one entry per line. This option does not disable long format output. (Long format output is enabled by -g, -l (ell), -n, and -o; and disabled by -C, -m, and -x.) If an option that enables long format output (-g, -l (ell), -n, and -o) is given with an option that disables long format output (-C, -m, and -x), this shall not be considered an error. The last of these options specified shall determine whether long format output is written. If -R, -d, or -f are specified, the results of specifying these mutually-exclusive options are specified by the descriptions of these options above. If more than one of any of the other options shown in the SYNOPSIS section in mutually-exclusive sets are given, this shall not be considered an error; the last option specified in each set shall determine the output. Note that if -t is specified, -c and -u are not only mutually-exclusive with each other, they are also mutually-exclusive with -S when determining sort order. But even if -S is specified after all occurrences of -c, -t, and -u, the last use of -c or -u determines the timestamp printed when producing long format output.
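As a sketch of how these options compose (the directory is a placeholder), `ls -lnS /var/log` produces a long-format listing with numeric UIDs and GIDs (-n) sorted by decreasing file size (-S), and `ls -ltr` inverts the modification-time sort so the most recently changed entries appear last.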
# ls > List directory contents. More information: > https://www.gnu.org/software/coreutils/ls. * List files one per line: `ls -1` * List all files, including hidden files: `ls -a` * List all files, with trailing `/` added to directory names: `ls -F` * Long format list (permissions, ownership, size, and modification date) of all files: `ls -la` * Long format list with size displayed using human-readable units (KiB, MiB, GiB): `ls -lh` * Long format list sorted by size (descending): `ls -lS` * Long format list of all files, sorted by modification date (oldest first): `ls -ltr` * Only list directories: `ls -d */`
renice
The renice utility shall request that the nice values (see the Base Definitions volume of POSIX.1‐2017, Section 3.244, Nice Value) of one or more running processes be changed. By default, the applicable processes are specified by their process IDs. When a process group is specified (see -g), the request shall apply to all processes in the process group. The nice value shall be bounded in an implementation-defined manner. If the requested increment would raise or lower the nice value of the executed utility beyond implementation-defined limits, then the limit whose value was exceeded shall be used. When a user is reniced, the request applies to all processes whose saved set-user-ID matches the user ID corresponding to the user. Regardless of which options are supplied or any other factor, renice shall not alter the nice values of any process unless the user requesting such a change has appropriate privileges to do so for the specified process. If the user lacks appropriate privileges to perform the requested action, the utility shall return an error status. The saved set-user-ID of the user's process shall be checked instead of its effective user ID when renice attempts to determine the user ID of the process in order to determine whether the user has appropriate privileges. The renice utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines, except for Guideline 9. The following options shall be supported: -g Interpret the following operands as unsigned decimal integer process group IDs. -n increment Specify how the nice value of the specified process or processes is to be adjusted. The increment option- argument is a positive or negative decimal integer that shall be used to modify the nice value of the specified process or processes. Positive increment values shall cause a lower nice value. Negative increment values may require appropriate privileges and shall cause a higher nice value. -p Interpret the following operands as unsigned decimal integer process IDs. The -p option is the default if no options are specified. -u Interpret the following operands as users. If a user exists with a user name equal to the operand, then the user ID of that user is used in further processing. Otherwise, if the operand represents an unsigned decimal integer, it shall be used as the numeric user ID of the user.
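A hedged illustration (the PID and user name are placeholders): `renice -n 5 -p 1234` lowers the scheduling priority of process 1234, while `renice -n -5 -u alice` raises the priority of all of alice's processes and typically requires appropriate privileges.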
# renice > Alters the scheduling priority/niceness of one or more running processes. > Niceness values range from -20 (most favorable to the process) to 19 (least > favorable to the process). More information: https://manned.org/renice. * Change priority of a running process: `renice -n {{niceness_value}} -p {{pid}}` * Change priority of all processes owned by a user: `renice -n {{niceness_value}} -u {{user}}` * Change priority of all processes that belong to a process group: `renice -n {{niceness_value}} --pgrp {{process_group}}`
groups
The groups command displays the current group names or ID values. If the value does not have a corresponding entry in /etc/group, the value will be displayed as the numerical group value. The optional user parameter will display the groups for the named user.
# groups > Print group memberships for a user. See also: `groupadd`, `groupdel`, > `groupmod`. More information: https://www.gnu.org/software/coreutils/groups. * Print group memberships for the current user: `groups` * Print group memberships for a list of users: `groups {{username1 username2 ...}}`
comm
The comm utility shall read file1 and file2, which should be ordered in the current collating sequence, and produce three text columns as output: lines only in file1, lines only in file2, and lines in both files. If the lines in both files are not ordered according to the collating sequence of the current locale, the results are unspecified. If the collating sequence of the current locale does not have a total ordering of all characters (see the Base Definitions volume of POSIX.1‐2017, Section 7.3.2, LC_COLLATE) and any lines from the input files collate equally but are not identical, comm should treat them as different lines but may treat them as being the same. If it treats them as different, comm should expect them to be ordered according to a further byte-by-byte comparison using the collating sequence for the POSIX locale and if they are not ordered in this way, the output of comm can identify such lines as being both unique to file1 and unique to file2 instead of being in both files. The comm utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines. The following options shall be supported: -1 Suppress the output column of lines unique to file1. -2 Suppress the output column of lines unique to file2. -3 Suppress the output column of lines duplicated in file1 and file2.
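Since comm assumes its inputs are already ordered in the current collating sequence, a common pattern is to sort both files under the same locale first (file names here are placeholders): `LC_ALL=C sort -o a.srt a.txt && LC_ALL=C sort -o b.srt b.txt && LC_ALL=C comm -12 a.srt b.srt` prints only the lines common to both files using a stable byte-wise ordering.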
# comm > Select or reject lines common to two files. Both files must be sorted. More > information: https://www.gnu.org/software/coreutils/comm. * Produce three tab-separated columns: lines only in first file, lines only in second file and common lines: `comm {{file1}} {{file2}}` * Print only lines common to both files: `comm -12 {{file1}} {{file2}}` * Print only lines common to both files, reading one file from `stdin`: `cat {{file1}} | comm -12 - {{file2}}` * Get lines only found in first file, saving the result to a third file: `comm -23 {{file1}} {{file2}} > {{file1_only}}` * Print lines only found in second file, when the files aren't sorted: `comm -13 <(sort {{file1}}) <(sort {{file2}})`
iostat
The iostat command is used for monitoring system input/output device loading by observing the time the devices are active in relation to their average transfer rates. The iostat command generates reports that can be used to change system configuration to better balance the input/output load between physical disks. The first report generated by the iostat command provides statistics concerning the time since the system was booted, unless the -y option is used (in this case, this first report is omitted). Each subsequent report covers the time since the previous report. All statistics are reported each time the iostat command is run. The report consists of a CPU header row followed by a row of CPU statistics. On multiprocessor systems, CPU statistics are calculated system-wide as averages among all processors. A device header row is displayed followed by a line of statistics for each device that is configured. The interval parameter specifies the amount of time in seconds between each report. The count parameter can be specified in conjunction with the interval parameter. If the count parameter is specified, the value of count determines the number of reports generated at interval seconds apart. If the interval parameter is specified without the count parameter, the iostat command generates reports continuously. -c Display the CPU utilization report. --compact Don't break the Device Utilization Report into sub-reports so that all the metrics get displayed on a single line. -d Display the device utilization report. --dec={ 0 | 1 | 2 } Specify the number of decimal places to use (0 to 2, default value is 2). -f directory +f directory Specify an alternative directory for iostat to read devices statistics. Option -f tells iostat to use only the files located in the alternative directory, whereas option +f tells it to use both the standard kernel files and the files located in the alternative directory to read device statistics. directory is a directory containing files with statistics for devices managed in userspace. It may contain: - a "diskstats" file whose format is compliant with that located in "/proc", - statistics for individual devices contained in files whose format is compliant with that of files located in "/sys". In particular, the following files located in directory may be used by iostat: directory/block/device/stat directory/block/device/partition/stat partition files must have an entry in directory/dev/block/ directory, e.g.: directory/dev/block/major:minor --> ../../block/device/partition -g group_name { device [...] | ALL } Display statistics for a group of devices. The iostat command reports statistics for each individual device in the list then a line of global statistics for the group displayed as group_name and made up of all the devices in the list. The ALL keyword means that all the block devices defined by the system shall be included in the group. -H This option must be used with option -g and indicates that only global statistics for the group are to be displayed, and not statistics for individual devices in the group. -h This option is equivalent to specifying --human --pretty. --human Print sizes in human readable format (e.g. 1.0k, 1.2M, etc.) The units displayed with this option supersede any other default units (e.g. kilobytes, sectors...) associated with the metrics. -j { ID | LABEL | PATH | UUID | ... } [ device [...] | ALL ] Display persistent device names. Keywords ID, LABEL, etc. specify the type of the persistent name. 
These keywords are not limited, only prerequisite is that directory with required persistent names is present in /dev/disk. Optionally, multiple devices can be specified in the chosen persistent name type. Because persistent device names are usually long, option --pretty is implicitly set with this option. -k Display statistics in kilobytes per second. -m Display statistics in megabytes per second. -N Display the registered device mapper names for any device mapper devices. Useful for viewing LVM2 statistics. -o JSON Display the statistics in JSON (JavaScript Object Notation) format. JSON output field order is undefined, and new fields may be added in the future. -p [ { device[,...] | ALL } ] Display statistics for block devices and all their partitions that are used by the system. If a device name is entered on the command line, then statistics for it and all its partitions are displayed. Last, the ALL keyword indicates that statistics have to be displayed for all the block devices and partitions defined by the system, including those that have never been used. If option -j is defined before this option, devices entered on the command line can be specified with the chosen persistent name type. --pretty Make the Device Utilization Report easier to read by a human. The device name will be printed on the right side. The report may also be broken into sub-reports if there are many metrics to display (use --compact option to prevent this). -s Display a short (narrow) version of the report that should fit in 80 characters wide screens. -t Print the time for each report displayed. The timestamp format may depend on the value of the S_TIME_FORMAT environment variable (see below). -V Print version number then exit. -x Display extended statistics. -y Omit first report with statistics since system boot, if displaying multiple records at given interval. -z Tell iostat to omit output for any devices for which there was no activity during the sample period.
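For instance, `iostat -dxz 2 5` prints five extended device-utilization reports at two-second intervals, suppressing devices with no activity in the sample period; adding -t timestamps each report. (The interval and count here are arbitrary examples.)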
# iostat > Report statistics for devices and partitions. More information: > https://manned.org/iostat. * Display a report of CPU and disk statistics since system startup: `iostat` * Display a report of CPU and disk statistics with units converted to megabytes: `iostat -m` * Display CPU statistics: `iostat -c` * Display disk statistics with disk names (including LVM): `iostat -N` * Display extended disk statistics with disk names for device "sda": `iostat -xN {{sda}}` * Display incremental reports of CPU and disk statistics every 2 seconds: `iostat {{2}}`
pathchk
Diagnose invalid or unportable file names. -p check for most POSIX systems -P check for empty names and leading "-" --portability check for all POSIX systems (equivalent to -p -P) --help display this help and exit --version output version information and exit
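For example, `pathchk -p -P 'backup/July report?.txt'` (equivalent to `pathchk --portability 'backup/July report?.txt'`) flags characters outside the POSIX portable filename character set as well as empty components or a leading "-"; the file name is only an illustration.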
# pathchk > Check the validity and portability of one or more pathnames. More > information: https://www.gnu.org/software/coreutils/pathchk. * Check pathnames for validity in the current system: `pathchk {{path1 path2 …}}` * Check pathnames for validity on a wider range of POSIX compliant systems: `pathchk -p {{path1 path2 …}}` * Check pathnames for validity on all POSIX compliant systems: `pathchk --portability {{path1 path2 …}}` * Only check for empty pathnames or leading dashes (-): `pathchk -P {{path1 path2 …}}`
git-tag
Add a tag reference in refs/tags/, unless -d/-l/-v is given to delete, list or verify tags. Unless -f is given, the named tag must not yet exist. If one of -a, -s, or -u <key-id> is passed, the command creates a tag object, and requires a tag message. Unless -m <msg> or -F <file> is given, an editor is started for the user to type in the tag message. If -m <msg> or -F <file> is given and -a, -s, and -u <key-id> are absent, -a is implied. Otherwise, a tag reference that points directly at the given object (i.e., a lightweight tag) is created. A GnuPG signed tag object will be created when -s or -u <key-id> is used. When -u <key-id> is not used, the committer identity for the current user is used to find the GnuPG key for signing. The configuration variable gpg.program is used to specify custom GnuPG binary. Tag objects (created with -a, -s, or -u) are called "annotated" tags; they contain a creation date, the tagger name and e-mail, a tagging message, and an optional GnuPG signature. Whereas a "lightweight" tag is simply a name for an object (usually a commit object). Annotated tags are meant for release while lightweight tags are meant for private or temporary object labels. For this reason, some git commands for naming objects (like git describe) will ignore lightweight tags by default. -a, --annotate Make an unsigned, annotated tag object -s, --sign Make a GPG-signed tag, using the default e-mail address’s key. The default behavior of tag GPG-signing is controlled by tag.gpgSign configuration variable if it exists, or disabled otherwise. See git-config(1). --no-sign Override tag.gpgSign configuration variable that is set to force each and every tag to be signed. -u <key-id>, --local-user=<key-id> Make a GPG-signed tag, using the given key. -f, --force Replace an existing tag with the given name (instead of failing) -d, --delete Delete existing tags with the given names. -v, --verify Verify the GPG signature of the given tag names. -n<num> <num> specifies how many lines from the annotation, if any, are printed when using -l. Implies --list. The default is not to print any annotation lines. If no number is given to -n, only the first line is printed. If the tag is not annotated, the commit message is displayed instead. -l, --list List tags. With optional <pattern>..., e.g. git tag --list 'v-*', list only the tags that match the pattern(s). Running "git tag" without arguments also lists all tags. The pattern is a shell wildcard (i.e., matched using fnmatch(3)). Multiple patterns may be given; if any of them matches, the tag is shown. This option is implicitly supplied if any other list-like option such as --contains is provided. See the documentation for each of those options for details. --sort=<key> Sort based on the key given. Prefix - to sort in descending order of the value. You may use the --sort=<key> option multiple times, in which case the last key becomes the primary key. Also supports "version:refname" or "v:refname" (tag names are treated as versions). The "version:refname" sort order can also be affected by the "versionsort.suffix" configuration variable. The keys supported are the same as those in git for-each-ref. Sort order defaults to the value configured for the tag.sort variable if it exists, or lexicographic order otherwise. See git-config(1). --color[=<when>] Respect any colors specified in the --format option. The <when> field must be one of always, never, or auto (if <when> is absent, behave as if always was given). 
-i, --ignore-case Sorting and filtering tags are case insensitive. --omit-empty Do not print a newline after formatted refs where the format expands to the empty string. --column[=<options>], --no-column Display tag listing in columns. See configuration variable column.tag for option syntax. --column and --no-column without options are equivalent to always and never respectively. This option is only applicable when listing tags without annotation lines. --contains [<commit>] Only list tags which contain the specified commit (HEAD if not specified). Implies --list. --no-contains [<commit>] Only list tags which don’t contain the specified commit (HEAD if not specified). Implies --list. --merged [<commit>] Only list tags whose commits are reachable from the specified commit (HEAD if not specified). --no-merged [<commit>] Only list tags whose commits are not reachable from the specified commit (HEAD if not specified). --points-at <object> Only list tags of the given object (HEAD if not specified). Implies --list. -m <msg>, --message=<msg> Use the given tag message (instead of prompting). If multiple -m options are given, their values are concatenated as separate paragraphs. Implies -a if none of -a, -s, or -u <key-id> is given. -F <file>, --file=<file> Take the tag message from the given file. Use - to read the message from the standard input. Implies -a if none of -a, -s, or -u <key-id> is given. -e, --edit The message taken from file with -F and command line with -m are usually used as the tag message unmodified. This option lets you further edit the message taken from these sources. --cleanup=<mode> This option sets how the tag message is cleaned up. The <mode> can be one of verbatim, whitespace and strip. The strip mode is default. The verbatim mode does not change message at all, whitespace removes just leading/trailing whitespace lines and strip removes both whitespace and commentary. --create-reflog Create a reflog for the tag. To globally enable reflogs for tags, see core.logAllRefUpdates in git-config(1). The negated form --no-create-reflog only overrides an earlier --create-reflog, but currently does not negate the setting of core.logAllRefUpdates. --format=<format> A string that interpolates %(fieldname) from a tag ref being shown and the object it points at. The format is the same as that of git-for-each-ref(1). When unspecified, defaults to %(refname:strip=2). <tagname> The name of the tag to create, delete, or describe. The new tag name must pass all checks defined by git-check-ref-format(1). Some of these checks may restrict the characters allowed in a tag name. <commit>, <object> The object that the new tag will refer to, usually a commit. Defaults to HEAD.
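A hedged example of the options above (the tag names are placeholders; the format fields come from git-for-each-ref(1)): `git tag -a v1.4.0 -m 'Release 1.4.0'` creates an annotated tag, and `git tag --list 'v1.*' --sort=-version:refname --format='%(refname:strip=2) %(taggerdate:short)'` lists matching tags newest-version-first together with their tagging dates.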
# git tag > Create, list, delete or verify tags. A tag is a static reference to a > specific commit. More information: https://git-scm.com/docs/git-tag. * List all tags: `git tag` * Create a tag with the given name pointing to the current commit: `git tag {{tag_name}}` * Create a tag with the given name pointing to a given commit: `git tag {{tag_name}} {{commit}}` * Create an annotated tag with the given message: `git tag {{tag_name}} -m {{tag_message}}` * Delete the tag with the given name: `git tag -d {{tag_name}}` * Get updated tags from upstream: `git fetch --tags` * List all tags whose ancestors include a given commit: `git tag --contains {{commit}}`
last
last looks through the file wtmp (which records all logins/logouts) and prints information about connect times of users. Records are printed from most recent to least recent. Records can be specified by tty and username. tty names can be abbreviated: last 0 is equivalent to last tty0. Multiple arguments can be specified: last root console will print all of the entries for the user root and all entries logged in on the console tty. The special users reboot and shutdown log in when the system reboots or (surprise) shuts down. last reboot will produce a record of reboot times. If last is interrupted by a quit signal, it prints out how far its search in the wtmp file had reached and then quits. -n num, --lines num Limit the number of lines that last outputs. This is different from u*x last, which lets you specify the number right after a dash. -f filename, --file filename Read from the file filename instead of the system's wtmp file. --complain When the wtmp file has a problem (a time-warp, missing record, or whatever), print out an appropriate error. --tw-leniency num Set the time warp leniency to num seconds. Records in wtmp files might be slightly out of order (most notably when two logins occur within a one-second period - the second one gets written first). By default, this value is set to 60. If the program notices this problem, time is not assigned to users unless the --timewarps flag is used. --tw-suspicious num Set the time warp suspicious value to num seconds. If two records in the wtmp file are farther than this number of seconds apart, there is a problem with the wtmp file (or your machine hasn't been used in a year). If the program notices this problem, time is not assigned to users unless the --timewarps flag is used. --no-truncate-ftp-entries When printing out the information, don't chop the number part off of `ftp'XXXX entries. -x, --more-records Print out run level changes, shutdowns, and time changes in addition to the normal records. -a, --all-records Print out all records in the wtmp file. -i, --ip-address Some machines store the IP address of a connection in a utmp record. Enabling this option makes last print the IP address instead of the hostname. -w, --wide By default, last tries to print each entry within 80 columns. Use this option to instruct last to print out the fields in the wtmp file with full field widths. --debug Print verbose internal information. -s, --print-seconds Print seconds when displaying dates. -y, --print-year Print year when displaying dates. -V, --version Print last's version number. -h, --help Prints the usage string and default locations of system files to standard output and exits.
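As an illustration (the file path is an example of a rotated wtmp log), `last -x -f /var/log/wtmp.1 reboot` prints reboot, shutdown, and run-level records from an older wtmp file, and `last -n 10 -i root` limits the output to the ten most recent logins for root with IP addresses shown instead of hostnames.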
# last > View the last logged in users. More information: https://manned.org/last. * View last logins, their duration and other information as read from `/var/log/wtmp`: `last` * Specify how many of the last logins to show: `last -n {{login_count}}` * Print the full date and time for entries and then display the hostname column last to prevent truncation: `last -F -a` * View all logins by a specific user and show the IP address instead of the hostname: `last {{username}} -i` * View all recorded reboots (i.e., the last logins of the pseudo user "reboot"): `last reboot` * View all recorded shutdowns (i.e., the last logins of the pseudo user "shutdown"): `last shutdown`
git-fetch
Fetch branches and/or tags (collectively, "refs") from one or more other repositories, along with the objects necessary to complete their histories. Remote-tracking branches are updated (see the description of <refspec> below for ways to control this behavior). By default, any tag that points into the histories being fetched is also fetched; the effect is to fetch tags that point at branches that you are interested in. This default behavior can be changed by using the --tags or --no-tags options or by configuring remote.<name>.tagOpt. By using a refspec that fetches tags explicitly, you can fetch tags that do not point into branches you are interested in as well. git fetch can fetch from either a single named repository or URL, or from several repositories at once if <group> is given and there is a remotes.<group> entry in the configuration file. (See git-config(1)). When no remote is specified, by default the origin remote will be used, unless there’s an upstream branch configured for the current branch. The names of refs that are fetched, together with the object names they point at, are written to .git/FETCH_HEAD. This information may be used by scripts or other git commands, such as git-pull(1). --all Fetch all remotes. -a, --append Append ref names and object names of fetched refs to the existing contents of .git/FETCH_HEAD. Without this option old data in .git/FETCH_HEAD will be overwritten. --atomic Use an atomic transaction to update local refs. Either all refs are updated, or on error, no refs are updated. --depth=<depth> Limit fetching to the specified number of commits from the tip of each remote branch history. If fetching to a shallow repository created by git clone with --depth=<depth> option (see git-clone(1)), deepen or shorten the history to the specified number of commits. Tags for the deepened commits are not fetched. --deepen=<depth> Similar to --depth, except it specifies the number of commits from the current shallow boundary instead of from the tip of each remote branch history. --shallow-since=<date> Deepen or shorten the history of a shallow repository to include all reachable commits after <date>. --shallow-exclude=<revision> Deepen or shorten the history of a shallow repository to exclude commits reachable from a specified remote branch or tag. This option can be specified multiple times. --unshallow If the source repository is complete, convert a shallow repository to a complete one, removing all the limitations imposed by shallow repositories. If the source repository is shallow, fetch as much as possible so that the current repository has the same history as the source repository. --update-shallow By default when fetching from a shallow repository, git fetch refuses refs that require updating .git/shallow. This option updates .git/shallow and accepts such refs. --negotiation-tip=<commit|glob> By default, Git will report, to the server, commits reachable from all local refs to find common commits in an attempt to reduce the size of the to-be-received packfile. If specified, Git will only report commits reachable from the given tips. This is useful to speed up fetches when the user knows which local ref is likely to have commits in common with the upstream ref being fetched. This option may be specified more than once; if so, Git will report commits reachable from any of the given commits. The argument to this option may be a glob on ref names, a ref, or the (possibly abbreviated) SHA-1 of a commit. 
Specifying a glob is equivalent to specifying this option multiple times, one for each matching ref name. See also the fetch.negotiationAlgorithm and push.negotiate configuration variables documented in git-config(1), and the --negotiate-only option below. --negotiate-only Do not fetch anything from the server, and instead print the ancestors of the provided --negotiation-tip=* arguments, which we have in common with the server. This is incompatible with --recurse-submodules=[yes|on-demand]. Internally this is used to implement the push.negotiate option, see git-config(1). --dry-run Show what would be done, without making any changes. --porcelain Print the output to standard output in an easy-to-parse format for scripts. See section OUTPUT in git-fetch(1) for details. This is incompatible with --recurse-submodules=[yes|on-demand] and takes precedence over the fetch.output config option. --[no-]write-fetch-head Write the list of remote refs fetched in the FETCH_HEAD file directly under $GIT_DIR. This is the default. Passing --no-write-fetch-head from the command line tells Git not to write the file. Under --dry-run option, the file is never written. -f, --force When git fetch is used with <src>:<dst> refspec it may refuse to update the local branch as discussed in the <refspec> part below. This option overrides that check. -k, --keep Keep downloaded pack. --multiple Allow several <repository> and <group> arguments to be specified. No <refspec>s may be specified. --[no-]auto-maintenance, --[no-]auto-gc Run git maintenance run --auto at the end to perform automatic repository maintenance if needed. (--[no-]auto-gc is a synonym.) This is enabled by default. --[no-]write-commit-graph Write a commit-graph after fetching. This overrides the config setting fetch.writeCommitGraph. --prefetch Modify the configured refspec to place all refs into the refs/prefetch/ namespace. See the prefetch task in git-maintenance(1). -p, --prune Before fetching, remove any remote-tracking references that no longer exist on the remote. Tags are not subject to pruning if they are fetched only because of the default tag auto-following or due to a --tags option. However, if tags are fetched due to an explicit refspec (either on the command line or in the remote configuration, for example if the remote was cloned with the --mirror option), then they are also subject to pruning. Supplying --prune-tags is a shorthand for providing the tag refspec. See the PRUNING section below for more details. -P, --prune-tags Before fetching, remove any local tags that no longer exist on the remote if --prune is enabled. This option should be used more carefully: unlike --prune, it will remove any local references (local tags) that have been created. This option is a shorthand for providing the explicit tag refspec along with --prune; see the discussion about that in its documentation. See the PRUNING section below for more details. -n, --no-tags By default, tags that point at objects that are downloaded from the remote repository are fetched and stored locally. This option disables this automatic tag following. The default behavior for a remote may be specified with the remote.<name>.tagOpt setting. See git-config(1). --refetch Instead of negotiating with the server to avoid transferring commits and associated objects that are already present locally, this option fetches all objects as a fresh clone would. Use this to reapply a partial clone filter from configuration or using --filter= when the filter definition has changed. 
Automatic post-fetch maintenance will perform object database pack consolidation to remove any duplicate objects. --refmap=<refspec> When fetching refs listed on the command line, use the specified refspec (can be given more than once) to map the refs to remote-tracking branches, instead of the values of remote.*.fetch configuration variables for the remote repository. Providing an empty <refspec> to the --refmap option causes Git to ignore the configured refspecs and rely entirely on the refspecs supplied as command-line arguments. See section on "Configured Remote-tracking Branches" for details. -t, --tags Fetch all tags from the remote (i.e., fetch remote tags refs/tags/* into local tags with the same name), in addition to whatever else would otherwise be fetched. Using this option alone does not subject tags to pruning, even if --prune is used (though tags may be pruned anyway if they are also the destination of an explicit refspec; see --prune). --recurse-submodules[=yes|on-demand|no] This option controls if and under what conditions new commits of submodules should be fetched too. When recursing through submodules, git fetch always attempts to fetch "changed" submodules, that is, a submodule that has commits that are referenced by a newly fetched superproject commit but are missing in the local submodule clone. A changed submodule can be fetched as long as it is present locally e.g. in $GIT_DIR/modules/ (see gitsubmodules(7)); if the upstream adds a new submodule, that submodule cannot be fetched until it is cloned e.g. by git submodule update. When set to on-demand, only changed submodules are fetched. When set to yes, all populated submodules are fetched and submodules that are both unpopulated and changed are fetched. When set to no, submodules are never fetched. When unspecified, this uses the value of fetch.recurseSubmodules if it is set (see git-config(1)), defaulting to on-demand if unset. When this option is used without any value, it defaults to yes. -j, --jobs=<n> Number of parallel children to be used for all forms of fetching. If the --multiple option was specified, the different remotes will be fetched in parallel. If multiple submodules are fetched, they will be fetched in parallel. To control them independently, use the config settings fetch.parallel and submodule.fetchJobs (see git-config(1)). Typically, parallel recursive and multi-remote fetches will be faster. By default fetches are performed sequentially, not in parallel. --no-recurse-submodules Disable recursive fetching of submodules (this has the same effect as using the --recurse-submodules=no option). --set-upstream If the remote is fetched successfully, add upstream (tracking) reference, used by argument-less git-pull(1) and other commands. For more information, see branch.<name>.merge and branch.<name>.remote in git-config(1). --submodule-prefix=<path> Prepend <path> to paths printed in informative messages such as "Fetching submodule foo". This option is used internally when recursing over submodules. --recurse-submodules-default=[yes|on-demand] This option is used internally to temporarily provide a non-negative default value for the --recurse-submodules option. All other methods of configuring fetch’s submodule recursion (such as settings in gitmodules(5) and git-config(1)) override this option, as does specifying --[no-]recurse-submodules directly. -u, --update-head-ok By default git fetch refuses to update the head which corresponds to the current branch. This flag disables the check. 
This is purely for the internal use for git pull to communicate with git fetch, and unless you are implementing your own Porcelain you are not supposed to use it. --upload-pack <upload-pack> When given, and the repository to fetch from is handled by git fetch-pack, --exec=<upload-pack> is passed to the command to specify non-default path for the command run on the other end. -q, --quiet Pass --quiet to git-fetch-pack and silence any other internally used git commands. Progress is not reported to the standard error stream. -v, --verbose Be verbose. --progress Progress status is reported on the standard error stream by default when it is attached to a terminal, unless -q is specified. This flag forces progress status even if the standard error stream is not directed to a terminal. -o <option>, --server-option=<option> Transmit the given string to the server when communicating using protocol version 2. The given string must not contain a NUL or LF character. The server’s handling of server options, including unknown ones, is server-specific. When multiple --server-option=<option> are given, they are all sent to the other side in the order listed on the command line. --show-forced-updates By default, git checks if a branch is force-updated during fetch. This can be disabled through fetch.showForcedUpdates, but the --show-forced-updates option guarantees this check occurs. See git-config(1). --no-show-forced-updates By default, git checks if a branch is force-updated during fetch. Pass --no-show-forced-updates or set fetch.showForcedUpdates to false to skip this check for performance reasons. If used during git-pull the --ff-only option will still check for forced updates before attempting a fast-forward update. See git-config(1). -4, --ipv4 Use IPv4 addresses only, ignoring IPv6 addresses. -6, --ipv6 Use IPv6 addresses only, ignoring IPv4 addresses. <repository> The "remote" repository that is the source of a fetch or pull operation. This parameter can be either a URL (see the section GIT URLS below) or the name of a remote (see the section REMOTES below). <group> A name referring to a list of repositories as the value of remotes.<group> in the configuration file. (See git-config(1)). <refspec> Specifies which refs to fetch and which local refs to update. When no <refspec>s appear on the command line, the refs to fetch are read from remote.<repository>.fetch variables instead (see CONFIGURED REMOTE-TRACKING BRANCHES below). The format of a <refspec> parameter is an optional plus +, followed by the source <src>, followed by a colon :, followed by the destination ref <dst>. The colon can be omitted when <dst> is empty. <src> is typically a ref, but it can also be a fully spelled hex object name. A <refspec> may contain a * in its <src> to indicate a simple pattern match. Such a refspec functions like a glob that matches any ref with the same prefix. A pattern <refspec> must have a * in both the <src> and <dst>. It will map refs to the destination by replacing the * with the contents matched from the source. If a refspec is prefixed by ^, it will be interpreted as a negative refspec. Rather than specifying which refs to fetch or which local refs to update, such a refspec will instead specify refs to exclude. A ref will be considered to match if it matches at least one positive refspec, and does not match any negative refspec. Negative refspecs can be useful to restrict the scope of a pattern refspec so that it will not include specific refs. Negative refspecs can themselves be pattern refspecs. 
However, they may only contain a <src> and do not specify a <dst>. Fully spelled out hex object names are also not supported. tag <tag> means the same as refs/tags/<tag>:refs/tags/<tag>; it requests fetching everything up to the given tag. The remote ref that matches <src> is fetched, and if <dst> is not an empty string, an attempt is made to update the local ref that matches it. Whether that update is allowed without --force depends on the ref namespace it’s being fetched to, the type of object being fetched, and whether the update is considered to be a fast-forward. Generally, the same rules apply for fetching as when pushing; see the <refspec>... section of git-push(1) for what those are. Exceptions to those rules particular to git fetch are noted below. Until Git version 2.20, and unlike when pushing with git-push(1), any updates to refs/tags/* would be accepted without + in the refspec (or --force). When fetching, we promiscuously considered all tag updates from a remote to be forced fetches. Since Git version 2.20, fetching to update refs/tags/* works the same way as when pushing. I.e. any updates will be rejected without + in the refspec (or --force). Unlike when pushing with git-push(1), any updates outside of refs/{tags,heads}/* will be accepted without + in the refspec (or --force), whether that’s swapping e.g. a tree object for a blob, or a commit for another commit that doesn’t have the previous commit as an ancestor, etc. Unlike when pushing with git-push(1), there is no configuration that will amend these rules, and nothing like a pre-fetch hook analogous to the pre-receive hook. As with pushing with git-push(1), all of the rules described above about what’s not allowed as an update can be overridden by adding the optional leading + to a refspec (or using the --force command line option). The only exception to this is that no amount of forcing will make the refs/heads/* namespace accept a non-commit object. Note When the remote branch you want to fetch is known to be rewound and rebased regularly, it is expected that its new tip will not be a descendant of its previous tip (as stored in your remote-tracking branch the last time you fetched). You would want to use the + sign to indicate that non-fast-forward updates will be needed for such branches. There is no way to determine or declare that a branch will be made available in a repository with this behavior; the pulling user simply must know this is the expected usage pattern for a branch. --stdin Read refspecs, one per line, from stdin in addition to those provided as arguments. The "tag <name>" format is not supported.
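For example, a refspec-based invocation might look like this (a sketch only; the remote, branch, and tag names are placeholders): `git fetch origin '+refs/heads/*:refs/remotes/origin/*' '^refs/heads/wip/*'` fetches every branch under the usual remote-tracking mapping while the negative refspec excludes anything under wip/, and `git fetch origin tag v1.0.0` fetches a single tag using the tag <tag> shorthand described above.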
# git fetch > Download objects and refs from a remote repository. More information: > https://git-scm.com/docs/git-fetch. * Fetch the latest changes from the default remote upstream repository (if set): `git fetch` * Fetch new branches from a specific remote upstream repository: `git fetch {{remote_name}}` * Fetch the latest changes from all remote upstream repositories: `git fetch --all` * Also fetch tags from the remote upstream repository: `git fetch --tags` * Delete local references to remote branches that have been deleted upstream: `git fetch --prune`
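* Remove stale remote-tracking branches and stale local tags while fetching (an illustrative combination of the pruning options described above): `git fetch --prune --prune-tags {{remote_name}}`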
xargs
The xargs utility shall construct a command line consisting of the utility and argument operands specified followed by as many arguments read in sequence from standard input as fit in length and number constraints specified by the options. The xargs utility shall then invoke the constructed command line and wait for its completion. This sequence shall be repeated until one of the following occurs: * An end-of-file condition is detected on standard input. * An argument consisting of just the logical end-of-file string (see the -E eofstr option) is found on standard input after double-quote processing, <apostrophe> processing, and <backslash>-escape processing (see next paragraph). All arguments up to but not including the argument consisting of just the logical end-of-file string shall be used as arguments in constructed command lines. * An invocation of a constructed command line returns an exit status of 255. The application shall ensure that arguments in the standard input are separated by unquoted <blank> characters, unescaped <blank> characters, or <newline> characters. A string of zero or more non-double-quote ('"') characters and non-<newline> characters can be quoted by enclosing them in double-quotes. A string of zero or more non-<apostrophe> ('\'') characters and non-<newline> characters can be quoted by enclosing them in <apostrophe> characters. Any unquoted character can be escaped by preceding it with a <backslash>. The utility named by utility shall be executed one or more times until the end-of-file is reached or the logical end-of file string is found. The results are unspecified if the utility named by utility attempts to read from its standard input. The generated command line length shall be the sum of the size in bytes of the utility name and each argument treated as strings, including a null byte terminator for each of these strings. The xargs utility shall limit the command line length such that when the command line is invoked, the combined argument and environment lists (see the exec family of functions in the System Interfaces volume of POSIX.1‐2017) shall not exceed {ARG_MAX}-2048 bytes. Within this constraint, if neither the -n nor the -s option is specified, the default command line length shall be at least {LINE_MAX}. The xargs utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines. The following options shall be supported: -E eofstr Use eofstr as the logical end-of-file string. If -E is not specified, it is unspecified whether the logical end-of-file string is the <underscore> character ('_') or the end-of-file string capability is disabled. When eofstr is the null string, the logical end-of-file string capability shall be disabled and <underscore> characters shall be taken literally. -I replstr Insert mode: utility is executed for each logical line from standard input. Arguments in the standard input shall be separated only by unescaped <newline> characters, not by <blank> characters. Any unquoted unescaped <blank> characters at the beginning of each line shall be ignored. The resulting argument shall be inserted in arguments in place of each occurrence of replstr. At least five arguments in arguments can each contain one or more instances of replstr. Each of these constructed arguments cannot grow larger than an implementation-defined limit greater than or equal to 255 bytes. Option -x shall be forced on. 
-L number The utility shall be executed for each non-empty number lines of arguments from standard input. The last invocation of utility shall be with fewer lines of arguments if fewer than number remain. A line is considered to end with the first <newline> unless the last character of the line is an unescaped <blank>; a trailing unescaped <blank> signals continuation to the next non-empty line, inclusive. -n number Invoke utility using as many standard input arguments as possible, up to number (a positive decimal integer) arguments maximum. Fewer arguments shall be used if: * The command line length accumulated exceeds the size specified by the -s option (or {LINE_MAX} if there is no -s option). * The last iteration has fewer than number, but not zero, operands remaining. -p Prompt mode: the user is asked whether to execute utility at each invocation. Trace mode (-t) is turned on to write the command instance to be executed, followed by a prompt to standard error. An affirmative response read from /dev/tty shall execute the command; otherwise, that particular invocation of utility shall be skipped. -s size Invoke utility using as many standard input arguments as possible yielding a command line length less than size (a positive decimal integer) bytes. Fewer arguments shall be used if: * The total number of arguments exceeds that specified by the -n option. * The total number of lines exceeds that specified by the -L option. * End-of-file is encountered on standard input before size bytes are accumulated. Values of size up to at least {LINE_MAX} bytes shall be supported, provided that the constraints specified in the DESCRIPTION are met. It shall not be considered an error if a value larger than that supported by the implementation or exceeding the constraints specified in the DESCRIPTION is given; xargs shall use the largest value it supports within the constraints. -t Enable trace mode. Each generated command line shall be written to standard error just prior to invocation. -x Terminate if a constructed command line will not fit in the implied or specified size (see the -s option above).
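As a rough illustration of how the options above interact (the file names are hypothetical), `printf '%s\n' a b c d | xargs -n 2 echo` invokes echo twice, once with "a b" and once with "c d", whereas `printf '%s\n' one.txt two.txt | xargs -I {} cp {} {}.bak` runs cp once per input line, substituting each line for every occurrence of the {} placeholder.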
# xargs > Execute a command with piped arguments coming from another command, a file, > etc. The input is treated as a single block of text and split into separate > pieces on spaces, tabs, newlines and end-of-file. More information: > https://pubs.opengroup.org/onlinepubs/9699919799/utilities/xargs.html. * Run a command using the input data as arguments: `{{arguments_source}} | xargs {{command}}` * Run multiple chained commands on the input data: `{{arguments_source}} | xargs sh -c "{{command1}} && {{command2}} | {{command3}}"` * Delete all files with a `.backup` extension (`-print0` uses a null character to split file names, and `-0` uses it as delimiter): `find . -name {{'*.backup'}} -print0 | xargs -0 rm -v` * Execute the command once for each input line, replacing any occurrences of the placeholder (here marked as `_`) with the input line: `{{arguments_source}} | xargs -I _ {{command}} _ {{optional_extra_arguments}}` * Parallel runs of up to `max-procs` processes at a time; the default is 1. If `max-procs` is 0, xargs will run as many processes as possible at a time: `{{arguments_source}} | xargs -P {{max-procs}} {{command}}`
jobs
The jobs utility shall display the status of jobs that were started in the current shell environment; see Section 2.12, Shell Execution Environment. When jobs reports the termination status of a job, the shell shall remove its process ID from the list of those ``known in the current shell execution environment''; see Section 2.9.3.1, Examples. The jobs utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines. The following options shall be supported: -l (The letter ell.) Provide more information about each job listed. This information shall include the job number, current job, process group ID, state, and the command that formed the job. -p Display only the process IDs for the process group leaders of the selected jobs. By default, the jobs utility shall display the status of all stopped jobs, running background jobs and all jobs whose status has changed and have not been reported by the shell.
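For instance, because -p prints only the process IDs of the process group leaders of the selected jobs, a common shell idiom (shown here as a sketch; the effect on each job depends on how it handles the signal) is to terminate every background job started in the current shell with `kill $(jobs -p)`.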
# jobs > Display status of jobs in the current session. More information: > https://manned.org/jobs. * Show status of all jobs: `jobs` * Show status of a particular job: `jobs %{{job_id}}` * Show status and process IDs of all jobs: `jobs -l` * Show process IDs of all jobs: `jobs -p`
objdump
objdump displays information about one or more object files. The options control what particular information to display. This information is mostly useful to programmers who are working on the compilation tools, as opposed to programmers who just want their program to compile and work. objfile... are the object files to be examined. When you specify archives, objdump shows information on each of the member object files. The long and short forms of options, shown here as alternatives, are equivalent. At least one option from the list -a,-d,-D,-e,-f,-g,-G,-h,-H,-p,-P,-r,-R,-s,-S,-t,-T,-V,-x must be given. -a --archive-header If any of the objfile files are archives, display the archive header information (in a format similar to ls -l). Besides the information you could list with ar tv, objdump -a shows the object file format of each archive member. --adjust-vma=offset When dumping information, first add offset to all the section addresses. This is useful if the section addresses do not correspond to the symbol table, which can happen when putting sections at particular addresses when using a format which can not represent section addresses, such as a.out. -b bfdname --target=bfdname Specify that the object-code format for the object files is bfdname. This option may not be necessary; objdump can automatically recognize many formats. For example, objdump -b oasys -m vax -h fu.o displays summary information from the section headers (-h) of fu.o, which is explicitly identified (-m) as a VAX object file in the format produced by Oasys compilers. You can list the formats available with the -i option. -C --demangle[=style] Decode (demangle) low-level symbol names into user-level names. Besides removing any initial underscore prepended by the system, this makes C++ function names readable. Different compilers have different mangling styles. The optional demangling style argument can be used to choose an appropriate demangling style for your compiler. --recurse-limit --no-recurse-limit --recursion-limit --no-recursion-limit Enables or disables a limit on the amount of recursion performed whilst demangling strings. Since the name mangling formats allow for an infinite level of recursion it is possible to create strings whose decoding will exhaust the amount of stack space available on the host machine, triggering a memory fault. The limit tries to prevent this from happening by restricting recursion to 2048 levels of nesting. The default is for this limit to be enabled, but disabling it may be necessary in order to demangle truly complicated names. Note however that if the recursion limit is disabled then stack exhaustion is possible and any bug reports about such an event will be rejected. -g --debugging Display debugging information. This attempts to parse STABS debugging format information stored in the file and print it out using a C like syntax. If no STABS debugging was found this option falls back on the -W option to print any DWARF information in the file. -e --debugging-tags Like -g, but the information is generated in a format compatible with ctags tool. -d --disassemble --disassemble=symbol Display the assembler mnemonics for the machine instructions from the input file. This option only disassembles those sections which are expected to contain instructions. If the optional symbol argument is given, then display the assembler mnemonics starting at symbol. 
If symbol is a function name then disassembly will stop at the end of the function, otherwise it will stop when the next symbol is encountered. If there are no matches for symbol then nothing will be displayed. Note that if the --dwarf=follow-links option is enabled then any symbol tables in linked debug info files will be read in and used when disassembling. -D --disassemble-all Like -d, but disassemble the contents of all sections, not just those expected to contain instructions. This option also has a subtle effect on the disassembly of instructions in code sections. When option -d is in effect, objdump will assume that any symbols present in a code section occur on the boundary between instructions and it will refuse to disassemble across such a boundary. When option -D is in effect, however, this assumption is suppressed. This means that it is possible for the output of -d and -D to differ if, for example, data is stored in code sections. If the target is an ARM architecture this switch also has the effect of forcing the disassembler to decode pieces of data found in code sections as if they were instructions. Note that if the --dwarf=follow-links option is enabled then any symbol tables in linked debug info files will be read in and used when disassembling. --no-addresses When disassembling, don't print addresses on each line or for symbols and relocation offsets. In combination with --no-show-raw-insn this may be useful for comparing compiler output. --prefix-addresses When disassembling, print the complete address on each line. This is the older disassembly format. -EB -EL --endian={big|little} Specify the endianness of the object files. This only affects disassembly. This can be useful when disassembling a file format which does not describe endianness information, such as S-records. -f --file-headers Display summary information from the overall header of each of the objfile files. -F --file-offsets When disassembling sections, whenever a symbol is displayed, also display the file offset of the region of data that is about to be dumped. If zeroes are being skipped, then when disassembly resumes, tell the user how many zeroes were skipped and the file offset of the location from where the disassembly resumes. When dumping sections, display the file offset of the location from where the dump starts. --file-start-context Specify that when displaying interlisted source code/disassembly (assumes -S) from a file that has not yet been displayed, extend the context to the start of the file. -h --section-headers --headers Display summary information from the section headers of the object file. File segments may be relocated to nonstandard addresses, for example by using the -Ttext, -Tdata, or -Tbss options to ld. However, some object file formats, such as a.out, do not store the starting address of the file segments. In those situations, although ld relocates the sections correctly, using objdump -h to list the file section headers cannot show the correct addresses. Instead, it shows the usual addresses, which are implicit for the target. Note, in some cases it is possible for a section to have both the READONLY and the NOREAD attributes set. In such cases the NOREAD attribute takes precedence, but objdump will report both since the exact setting of the flag bits might be important. -H --help Print a summary of the options to objdump and exit. -i --info Display a list showing all architectures and object formats available for specification with -b or -m.
-j name --section=name Display information only for section name. -L --process-links Display the contents of non-debug sections found in separate debuginfo files that are linked to the main file. This option automatically implies the -WK option, and only sections requested by other command line options will be displayed. -l --line-numbers Label the display (using debugging information) with the filename and source line numbers corresponding to the object code or relocs shown. Only useful with -d, -D, or -r. -m machine --architecture=machine Specify the architecture to use when disassembling object files. This can be useful when disassembling object files which do not describe architecture information, such as S-records. You can list the available architectures with the -i option. For most architectures it is possible to supply an architecture name and a machine name, separated by a colon. For example foo:bar would refer to the bar machine type in the foo architecture. This can be helpful if objdump has been configured to support multiple architectures. If the target is an ARM architecture then this switch has an additional effect. It restricts the disassembly to only those instructions supported by the architecture specified by machine. If it is necessary to use this switch because the input file does not contain any architecture information, but it is also desired to disassemble all the instructions, use -marm. -M options --disassembler-options=options Pass target-specific information to the disassembler. Only supported on some targets. If it is necessary to specify more than one disassembler option then multiple -M options can be used or can be placed together into a comma separated list. For ARC, dsp controls the printing of DSP instructions, spfp selects the printing of FPX single precision FP instructions, dpfp selects the printing of FPX double precision FP instructions, quarkse_em selects the printing of special QuarkSE-EM instructions, fpuda selects the printing of double precision assist instructions, fpus selects the printing of FPU single precision FP instructions, while fpud selects the printing of FPU double precision FP instructions. Additionally, one can choose to have all the immediates printed in hexadecimal using hex. By default, the short immediates are printed using the decimal representation, while the long immediate values are printed as hexadecimal. cpu=... allows one to enforce a particular ISA when disassembling instructions, overriding the -m value or whatever is in the ELF file. This might be useful to select ARC EM or HS ISA, because the architecture is the same for those and the disassembler relies on private ELF header data to decide if code is for EM or HS. This option might be specified multiple times - only the latest value will be used. Valid values are the same as for the assembler -mcpu=... option. If the target is an ARM architecture then this switch can be used to select which register name set is used during disassembly. Specifying -M reg-names-std (the default) will select the register names as used in ARM's instruction set documentation, but with register 13 called 'sp', register 14 called 'lr' and register 15 called 'pc'. Specifying -M reg-names-apcs will select the name set used by the ARM Procedure Call Standard, whilst specifying -M reg-names-raw will just use r followed by the register number.
There are also two variants on the APCS register naming scheme enabled by -M reg-names-atpcs and -M reg-names-special-atpcs which use the ARM/Thumb Procedure Call Standard naming conventions (either with the normal register names or the special register names). This option can also be used for ARM architectures to force the disassembler to interpret all instructions as Thumb instructions by using the switch --disassembler-options=force-thumb. This can be useful when attempting to disassemble Thumb code produced by other compilers. For AArch64 targets this switch can be used to set whether instructions are disassembled as the most general instruction using the -M no-aliases option or whether instruction notes should be generated as comments in the disassembly using -M notes. For the x86, some of the options duplicate functions of the -m switch, but allow finer grained control. "x86-64" "i386" "i8086" Select disassembly for the given architecture. "intel" "att" Select between intel syntax mode and AT&T syntax mode. "amd64" "intel64" Select between AMD64 ISA and Intel64 ISA. "intel-mnemonic" "att-mnemonic" Select between intel mnemonic mode and AT&T mnemonic mode. Note: "intel-mnemonic" implies "intel" and "att-mnemonic" implies "att". "addr64" "addr32" "addr16" "data32" "data16" Specify the default address size and operand size. These five options will be overridden if "x86-64", "i386" or "i8086" appear later in the option string. "suffix" When in AT&T mode and also for a limited set of instructions when in Intel mode, instructs the disassembler to print a mnemonic suffix even when the suffix could be inferred by the operands or, for certain instructions, the execution mode's defaults. For PowerPC, the -M argument raw selects disassembly of hardware insns rather than aliases. For example, you will see "rlwinm" rather than "clrlwi", and "addi" rather than "li". All of the -m arguments for gas that select a CPU are supported. These are: 403, 405, 440, 464, 476, 601, 603, 604, 620, 7400, 7410, 7450, 7455, 750cl, 821, 850, 860, a2, booke, booke32, cell, com, e200z2, e200z4, e300, e500, e500mc, e500mc64, e500x2, e5500, e6500, efs, power4, power5, power6, power7, power8, power9, power10, ppc, ppc32, ppc64, ppc64bridge, ppcps, pwr, pwr2, pwr4, pwr5, pwr5x, pwr6, pwr7, pwr8, pwr9, pwr10, pwrx, titan, vle, and future. 32 and 64 modify the default or a prior CPU selection, disabling and enabling 64-bit insns respectively. In addition, altivec, any, lsp, htm, vsx, spe and spe2 add capabilities to a previous or later CPU selection. any will disassemble any opcode known to binutils, but in cases where an opcode has two different meanings or different arguments, you may not see the disassembly you expect. If you disassemble without giving a CPU selection, a default will be chosen from information gleaned by BFD from the object file headers, but the result again may not be as you expect. For MIPS, this option controls the printing of instruction mnemonic names and register names in disassembled instructions. Multiple selections from the following may be specified as a comma separated string, and invalid options are ignored: "no-aliases" Print the 'raw' instruction mnemonic instead of some pseudo instruction mnemonic. I.e., print 'daddu' or 'or' instead of 'move', 'sll' instead of 'nop', etc. "msa" Disassemble MSA instructions. "virt" Disassemble the virtualization ASE instructions. "xpa" Disassemble the eXtended Physical Address (XPA) ASE instructions.
"gpr-names=ABI" Print GPR (general-purpose register) names as appropriate for the specified ABI. By default, GPR names are selected according to the ABI of the binary being disassembled. "fpr-names=ABI" Print FPR (floating-point register) names as appropriate for the specified ABI. By default, FPR numbers are printed rather than names. "cp0-names=ARCH" Print CP0 (system control coprocessor; coprocessor 0) register names as appropriate for the CPU or architecture specified by ARCH. By default, CP0 register names are selected according to the architecture and CPU of the binary being disassembled. "hwr-names=ARCH" Print HWR (hardware register, used by the "rdhwr" instruction) names as appropriate for the CPU or architecture specified by ARCH. By default, HWR names are selected according to the architecture and CPU of the binary being disassembled. "reg-names=ABI" Print GPR and FPR names as appropriate for the selected ABI. "reg-names=ARCH" Print CPU-specific register names (CP0 register and HWR names) as appropriate for the selected CPU or architecture. For any of the options listed above, ABI or ARCH may be specified as numeric to have numbers printed rather than names, for the selected types of registers. You can list the available values of ABI and ARCH using the --help option. For VAX, you can specify function entry addresses with -M entry:0xf00ba. You can use this multiple times to properly disassemble VAX binary files that don't contain symbol tables (like ROM dumps). In these cases, the function entry mask would otherwise be decoded as VAX instructions, which would probably lead to the rest of the function being wrongly disassembled. -p --private-headers Print information that is specific to the object file format. The exact information printed depends upon the object file format. For some object file formats, no additional information is printed. -P options --private=options Print information that is specific to the object file format. The argument options is a comma separated list that depends on the format (the list of options is displayed with the help). For XCOFF, the available options are: "header" "aout" "sections" "syms" "relocs" "lineno" "loader" "except" "typchk" "traceback" "toc" "ldinfo" Not all object formats support this option. In particular the ELF format does not use it. -r --reloc Print the relocation entries of the file. If used with -d or -D, the relocations are printed interspersed with the disassembly. -R --dynamic-reloc Print the dynamic relocation entries of the file. This is only meaningful for dynamic objects, such as certain types of shared libraries. As for -r, if used with -d or -D, the relocations are printed interspersed with the disassembly. -s --full-contents Display the full contents of any sections requested. By default all non-empty sections are displayed. -S --source Display source code intermixed with disassembly, if possible. Implies -d. --show-all-symbols When disassembling, show all the symbols that match a given address, not just the first one. --source-comment[=txt] Like the -S option, but all source code lines are displayed with a prefix of txt. Typically txt will be a comment string which can be used to distinguish the assembler code from the source code. If txt is not provided then a default string of "# " (hash followed by a space) will be used. --prefix=prefix Specify prefix to add to the absolute paths when used with -S. --prefix-strip=level Indicate how many initial directory names to strip off the hardwired absolute paths.
It has no effect without --prefix=prefix. --show-raw-insn When disassembling instructions, print the instruction in hex as well as in symbolic form. This is the default except when --prefix-addresses is used. --no-show-raw-insn When disassembling instructions, do not print the instruction bytes. This is the default when --prefix-addresses is used. --insn-width=width Display width bytes on a single line when disassembling instructions. --visualize-jumps[=color|=extended-color|=off] Visualize jumps that stay inside a function by drawing ASCII art between the start and target addresses. The optional =color argument adds color to the output using simple terminal colors. Alternatively the =extended-color argument will add color using 8-bit colors, but these might not work on all terminals. If it is necessary to disable the visualize-jumps option after it has previously been enabled then use visualize-jumps=off. --disassembler-color=off --disassembler-color=terminal --disassembler-color=on|color|colour --disassembler-color=extended|extended-color|extended-colour Enables or disables the use of colored syntax highlighting in disassembly output. The default behaviour is determined via a configure-time option. Note, not all architectures support colored syntax highlighting, and depending upon the terminal used, colored output may not actually be legible. The on argument adds colors using simple terminal colors. The terminal argument does the same, but only if the output device is a terminal. The extended-color argument is similar to the on argument, but it uses 8-bit colors. These may not work on all terminals. The off argument disables colored disassembly. -W[lLiaprmfFsoORtUuTgAckK] --dwarf[=rawline,=decodedline,=info,=abbrev,=pubnames,=aranges,=macro,=frames,=frames-interp,=str,=str-offsets,=loc,=Ranges,=pubtypes,=trace_info,=trace_abbrev,=trace_aranges,=gdb_index,=addr,=cu_index,=links,=follow-links] Displays the contents of the DWARF debug sections in the file, if any are present. Compressed debug sections are automatically decompressed (temporarily) before they are displayed. If one or more of the optional letters or words follows the switch then only those type(s) of data will be dumped. The letters and words refer to the following information: "a" "=abbrev" Displays the contents of the .debug_abbrev section. "A" "=addr" Displays the contents of the .debug_addr section. "c" "=cu_index" Displays the contents of the .debug_cu_index and/or .debug_tu_index sections. "f" "=frames" Display the raw contents of a .debug_frame section. "F" "=frames-interp" Display the interpreted contents of a .debug_frame section. "g" "=gdb_index" Displays the contents of the .gdb_index and/or .debug_names sections. "i" "=info" Displays the contents of the .debug_info section. Note: the output from this option can also be restricted by the use of the --dwarf-depth and --dwarf-start options. "k" "=links" Displays the contents of the .gnu_debuglink, .gnu_debugaltlink and .debug_sup sections, if any of them are present. Also displays any links to separate dwarf object files (dwo), if they are specified by the DW_AT_GNU_dwo_name or DW_AT_dwo_name attributes in the .debug_info section. "K" "=follow-links" Display the contents of any selected debug sections that are found in linked, separate debug info file(s). This can result in multiple versions of the same debug section being displayed if it exists in more than one file.
In addition, when displaying DWARF attributes, if a form is found that references the separate debug info file, then the referenced contents will also be displayed. Note - in some distributions this option is enabled by default. It can be disabled via the N debug option. The default can be chosen when configuring the binutils via the --enable-follow-debug-links=yes or --enable-follow-debug-links=no options. If these are not used then the default is to enable the following of debug links. Note - if support for the debuginfod protocol was enabled when the binutils were built then this option will also include an attempt to contact any debuginfod servers mentioned in the DEBUGINFOD_URLS environment variable. This could take some time to resolve. This behaviour can be disabled via the =do-not-use-debuginfod debug option. "N" "=no-follow-links" Disables the following of links to separate debug info files. "D" "=use-debuginfod" Enables contacting debuginfod servers if there is a need to follow debug links. This is the default behaviour. "E" "=do-not-use-debuginfod" Disables contacting debuginfod servers when there is a need to follow debug links. "l" "=rawline" Displays the contents of the .debug_line section in a raw format. "L" "=decodedline" Displays the interpreted contents of the .debug_line section. "m" "=macro" Displays the contents of the .debug_macro and/or .debug_macinfo sections. "o" "=loc" Displays the contents of the .debug_loc and/or .debug_loclists sections. "O" "=str-offsets" Displays the contents of the .debug_str_offsets section. "p" "=pubnames" Displays the contents of the .debug_pubnames and/or .debug_gnu_pubnames sections. "r" "=aranges" Displays the contents of the .debug_aranges section. "R" "=Ranges" Displays the contents of the .debug_ranges and/or .debug_rnglists sections. "s" "=str" Displays the contents of the .debug_str, .debug_line_str and/or .debug_str_offsets sections. "t" "=pubtype" Displays the contents of the .debug_pubtypes and/or .debug_gnu_pubtypes sections. "T" "=trace_aranges" Displays the contents of the .trace_aranges section. "u" "=trace_abbrev" Displays the contents of the .trace_abbrev section. "U" "=trace_info" Displays the contents of the .trace_info section. Note: displaying the contents of .debug_static_funcs, .debug_static_vars and debug_weaknames sections is not currently supported. --dwarf-depth=n Limit the dump of the ".debug_info" section to n children. This is only useful with --debug-dump=info. The default is to print all DIEs; the special value 0 for n will also have this effect. With a non-zero value for n, DIEs at or deeper than n levels will not be printed. The range for n is zero-based. --dwarf-start=n Print only DIEs beginning with the DIE numbered n. This is only useful with --debug-dump=info. If specified, this option will suppress printing of any header information and all DIEs before the DIE numbered n. Only siblings and children of the specified DIE will be printed. This can be used in conjunction with --dwarf-depth. --dwarf-check Enable additional checks for consistency of Dwarf information. --ctf[=section] Display the contents of the specified CTF section. CTF sections themselves contain many subsections, all of which are displayed in order. By default, display the name of the section named .ctf, which is the name emitted by ld. --ctf-parent=member If the CTF section contains ambiguously-defined types, it will consist of an archive of many CTF dictionaries, all inheriting from one dictionary containing unambiguous types. 
This member is by default named .ctf, like the section containing it, but it is possible to change this name using the "ctf_link_set_memb_name_changer" function at link time. When looking at CTF archives that have been created by a linker that uses the name changer to rename the parent archive member, --ctf-parent can be used to specify the name used for the parent. --sframe[=section] Display the contents of the specified SFrame section. By default, display the name of the section named .sframe, which is the name emitted by ld. -G --stabs Display the full contents of any sections requested. Display the contents of the .stab and .stab.index and .stab.excl sections from an ELF file. This is only useful on systems (such as Solaris 2.0) in which ".stab" debugging symbol-table entries are carried in an ELF section. In most other file formats, debugging symbol-table entries are interleaved with linkage symbols, and are visible in the --syms output. --start-address=address Start displaying data at the specified address. This affects the output of the -d, -r and -s options. --stop-address=address Stop displaying data at the specified address. This affects the output of the -d, -r and -s options. -t --syms Print the symbol table entries of the file. This is similar to the information provided by the nm program, although the display format is different. The format of the output depends upon the format of the file being dumped, but there are two main types. One looks like this: [ 4](sec 3)(fl 0x00)(ty 0)(scl 3) (nx 1) 0x00000000 .bss [ 6](sec 1)(fl 0x00)(ty 0)(scl 2) (nx 0) 0x00000000 fred where the number inside the square brackets is the number of the entry in the symbol table, the sec number is the section number, the fl value is the symbol's flag bits, the ty number is the symbol's type, the scl number is the symbol's storage class and the nx value is the number of auxiliary entries associated with the symbol. The last two fields are the symbol's value and its name. The other common output format, usually seen with ELF based files, looks like this: 00000000 l d .bss 00000000 .bss 00000000 g .text 00000000 fred Here the first number is the symbol's value (sometimes referred to as its address). The next field is actually a set of characters and spaces indicating the flag bits that are set on the symbol. These characters are described below. Next is the section with which the symbol is associated or *ABS* if the section is absolute (i.e. not connected with any section), or *UND* if the section is referenced in the file being dumped, but not defined there. After the section name comes another field, a number, which for common symbols is the alignment and for other symbols is the size. Finally the symbol's name is displayed. The flag characters are divided into 7 groups as follows: "l" "g" "u" "!" The symbol is a local (l), global (g), unique global (u), neither global nor local (a space) or both global and local (!). A symbol can be neither local nor global for a variety of reasons, e.g., because it is used for debugging, but it is probably an indication of a bug if it is ever both local and global. Unique global symbols are a GNU extension to the standard set of ELF symbol bindings. For such a symbol the dynamic linker will make sure that in the entire process there is just one symbol with this name and type in use. "w" The symbol is weak (w) or strong (a space). "C" The symbol denotes a constructor (C) or an ordinary symbol (a space). "W" The symbol is a warning (W) or a normal symbol (a space).
A warning symbol's name is a message to be displayed if the symbol following the warning symbol is ever referenced. "I" "i" The symbol is an indirect reference to another symbol (I), a function to be evaluated during reloc processing (i) or a normal symbol (a space). "d" "D" The symbol is a debugging symbol (d) or a dynamic symbol (D) or a normal symbol (a space). "F" "f" "O" The symbol is the name of a function (F) or a file (f) or an object (O) or just a normal symbol (a space). -T --dynamic-syms Print the dynamic symbol table entries of the file. This is only meaningful for dynamic objects, such as certain types of shared libraries. This is similar to the information provided by the nm program when given the -D (--dynamic) option. The output format is similar to that produced by the --syms option, except that an extra field is inserted before the symbol's name, giving the version information associated with the symbol. If the version is the default version to be used when resolving unversioned references to the symbol then it's displayed as is, otherwise it's put into parentheses. --special-syms When displaying symbols include those which the target considers to be special in some way and which would not normally be of interest to the user. -U [d|i|l|e|x|h] --unicode=[default|invalid|locale|escape|hex|highlight] Controls the display of UTF-8 encoded multibyte characters in strings. The default (--unicode=default) is to give them no special treatment. The --unicode=locale option displays the sequence in the current locale, which may or may not support them. The options --unicode=hex and --unicode=invalid display them as hex byte sequences enclosed by either angle brackets or curly braces. The --unicode=escape option displays them as escape sequences (\uxxxx) and the --unicode=highlight option displays them as escape sequences highlighted in red (if supported by the output device). The colouring is intended to draw attention to the presence of unicode sequences where they might not be expected. -V --version Print the version number of objdump and exit. -x --all-headers Display all available header information, including the symbol table and relocation entries. Using -x is equivalent to specifying all of -a -f -h -p -r -t. -w --wide Format some lines for output devices that have more than 80 columns. Also do not truncate symbol names when they are displayed. -z --disassemble-zeroes Normally the disassembly output will skip blocks of zeroes. This option directs the disassembler to disassemble those blocks, just like any other data. @file Read command-line options from file. The options read are inserted in place of the original @file option. If file does not exist, or cannot be read, then the option will be treated literally, and not removed. Options in file are separated by whitespace. A whitespace character may be included in an option by surrounding the entire option in either single or double quotes. Any character (including a backslash) may be included by prefixing the character to be included with a backslash. The file may itself contain additional @file options; any such options will be processed recursively.
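Putting several of the options above together, a hedged sketch (the file name and the addresses are placeholders): `objdump -d -S --start-address=0x401000 --stop-address=0x401080 ./a.out` disassembles only the given address window with source interleaved, and `objdump --dwarf=info --dwarf-depth=2 ./a.out` limits a dump of the .debug_info section to two levels of DIEs.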
# objdump > View information about object files. More information: > https://manned.org/objdump. * Display the file header information: `objdump -f {{binary}}` * Display the disassembled output of executable sections: `objdump -d {{binary}}` * Display the disassembled executable sections in intel syntax: `objdump -M intel -d {{binary}}` * Display a complete binary hex dump of all sections: `objdump -s {{binary}}`
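* Display the symbol table with C++ names demangled (illustrative): `objdump -t -C {{binary}}`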
git-worktree
Manage multiple working trees attached to the same repository. A git repository can support multiple working trees, allowing you to check out more than one branch at a time. With git worktree add a new working tree is associated with the repository, along with additional metadata that differentiates that working tree from others in the same repository. The working tree, along with this metadata, is called a "worktree". This new worktree is called a "linked worktree" as opposed to the "main worktree" prepared by git-init(1) or git-clone(1). A repository has one main worktree (if it’s not a bare repository) and zero or more linked worktrees. When you are done with a linked worktree, remove it with git worktree remove. In its simplest form, git worktree add <path> automatically creates a new branch whose name is the final component of <path>, which is convenient if you plan to work on a new topic. For instance, git worktree add ../hotfix creates new branch hotfix and checks it out at path ../hotfix. To instead work on an existing branch in a new worktree, use git worktree add <path> <branch>. On the other hand, if you just plan to make some experimental changes or do testing without disturbing existing development, it is often convenient to create a throwaway worktree not associated with any branch. For instance, git worktree add -d <path> creates a new worktree with a detached HEAD at the same commit as the current branch. If a working tree is deleted without using git worktree remove, then its associated administrative files, which reside in the repository (see "DETAILS" below), will eventually be removed automatically (see gc.worktreePruneExpire in git-config(1)), or you can run git worktree prune in the main or any linked worktree to clean up any stale administrative files. If the working tree for a linked worktree is stored on a portable device or network share which is not always mounted, you can prevent its administrative files from being pruned by issuing the git worktree lock command, optionally specifying --reason to explain why the worktree is locked. -f, --force By default, add refuses to create a new worktree when <commit-ish> is a branch name and is already checked out by another worktree, or if <path> is already assigned to some worktree but is missing (for instance, if <path> was deleted manually). This option overrides these safeguards. To add a missing but locked worktree path, specify --force twice. move refuses to move a locked worktree unless --force is specified twice. If the destination is already assigned to some other worktree but is missing (for instance, if <new-path> was deleted manually), then --force allows the move to proceed; use --force twice if the destination is locked. remove refuses to remove an unclean worktree unless --force is used. To remove a locked worktree, specify --force twice. -b <new-branch>, -B <new-branch> With add, create a new branch named <new-branch> starting at <commit-ish>, and check out <new-branch> into the new worktree. If <commit-ish> is omitted, it defaults to HEAD. By default, -b refuses to create a new branch if it already exists. -B overrides this safeguard, resetting <new-branch> to <commit-ish>. -d, --detach With add, detach HEAD in the new worktree. See "DETACHED HEAD" in git-checkout(1). --[no-]checkout By default, add checks out <commit-ish>, however, --no-checkout can be used to suppress checkout in order to make customizations, such as configuring sparse-checkout. See "Sparse checkout" in git-read-tree(1). 
--[no-]guess-remote With worktree add <path>, without <commit-ish>, instead of creating a new branch from HEAD, if there exists a tracking branch in exactly one remote matching the basename of <path>, base the new branch on the remote-tracking branch, and mark the remote-tracking branch as "upstream" from the new branch. This can also be set up as the default behaviour by using the worktree.guessRemote config option. --[no-]track When creating a new branch, if <commit-ish> is a branch, mark it as "upstream" from the new branch. This is the default if <commit-ish> is a remote-tracking branch. See --track in git-branch(1) for details. --lock Keep the worktree locked after creation. This is the equivalent of git worktree lock after git worktree add, but without a race condition. -n, --dry-run With prune, do not remove anything; just report what it would remove. --orphan With add, make the new worktree and index empty, associating the worktree with a new orphan/unborn branch named <new-branch>. --porcelain With list, output in an easy-to-parse format for scripts. This format will remain stable across Git versions and regardless of user configuration. It is recommended to combine this with -z. See below for details. -z Terminate each line with a NUL rather than a newline when --porcelain is specified with list. This makes it possible to parse the output when a worktree path contains a newline character. -q, --quiet With add, suppress feedback messages. -v, --verbose With prune, report all removals. With list, output additional information about worktrees (see below). --expire <time> With prune, only expire unused worktrees older than <time>. With list, annotate missing worktrees as prunable if they are older than <time>. --reason <string> With lock or with add --lock, an explanation why the worktree is locked. <worktree> Worktrees can be identified by path, either relative or absolute. If the last path components in the worktree’s path are unique among worktrees, they can be used to identify a worktree. For example, if you only have two worktrees, at /abc/def/ghi and /abc/def/ggg, then ghi or def/ghi is enough to point to the former worktree.
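For example (a sketch only; the path and the reason string are placeholders), a throwaway worktree on a removable drive can be created with a detached HEAD and protected from pruning in one step with `git worktree add --lock --reason "kept on USB drive" -d /mnt/usb/scratch`, or locked after the fact with `git worktree lock --reason "kept on USB drive" /mnt/usb/scratch`.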
# git worktree > Manage multiple working trees attached to the same repository. More > information: https://git-scm.com/docs/git-worktree. * Create a new directory with the specified branch checked out into it: `git worktree add {{path/to/directory}} {{branch}}` * Create a new directory with a new branch checked out into it: `git worktree add {{path/to/directory}} -b {{new_branch}}` * List all the working directories attached to this repository: `git worktree list` * Remove a worktree (after deleting worktree directory): `git worktree prune`
tee
Copy standard input to each FILE, and also to standard output. -a, --append append to the given FILEs, do not overwrite -i, --ignore-interrupts ignore interrupt signals -p operate in a more appropriate MODE with pipes. --output-error[=MODE] set behavior on write error. See MODE below --help display this help and exit --version output version information and exit MODE determines behavior with write errors on the outputs: warn diagnose errors writing to any output warn-nopipe diagnose errors writing to any output not a pipe exit exit on error writing to any output exit-nopipe exit on error writing to any output not a pipe The default MODE for the -p option is 'warn-nopipe'. With "nopipe" MODEs, exit immediately if all outputs become broken pipes. The default operation when --output-error is not specified is to exit immediately on error writing to a pipe, and to diagnose errors writing to non-pipe outputs.
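As an illustrative sketch of the MODE behavior above (the file name is a placeholder), `seq 100000 | tee -p output.txt | head -n 1` lets tee keep writing output.txt even after head exits and breaks the pipe, because -p defaults to warn-nopipe, whereas `seq 100000 | tee --output-error=exit output.txt | head -n 1` makes tee stop at the first write error on any output, including the broken pipe.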
# tee > Read from `stdin` and write to `stdout` and files (or commands). More > information: https://www.gnu.org/software/coreutils/tee. * Copy `stdin` to each file, and also to `stdout`: `echo "example" | tee {{path/to/file}}` * Append to the given files, do not overwrite: `echo "example" | tee -a {{path/to/file}}` * Print `stdin` to the terminal, and also pipe it into another program for further processing: `echo "example" | tee {{/dev/tty}} | {{xargs printf "[%s]"}}` * Create a directory called "example", count the number of characters in "example" and write "example" to the terminal: `echo "example" | tee >(xargs mkdir) >(wc -c)`
git-cvsexportcommit
Exports a commit from Git to a CVS checkout, making it easier to merge patches from a Git repository into a CVS repository. Specify the name of a CVS checkout using the -w switch or execute it from the root of the CVS working copy. In the latter case GIT_DIR must be defined. See examples below. It does its best to do the safe thing, it will check that the files are unchanged and up to date in the CVS checkout, and it will not autocommit by default. Supports file additions, removals, and commits that affect binary files. If the commit is a merge commit, you must tell git cvsexportcommit what parent the changeset should be done against. -c Commit automatically if the patch applied cleanly. It will not commit if any hunks fail to apply or there were other problems. -p Be pedantic (paranoid) when applying patches. Invokes patch with --fuzz=0 -a Add authorship information. Adds Author line, and Committer (if different from Author) to the message. -d Set an alternative CVSROOT to use. This corresponds to the CVS -d parameter. Usually users will not want to set this, except if using CVS in an asymmetric fashion. -f Force the merge even if the files are not up to date. -P Force the parent commit, even if it is not a direct parent. -m Prepend the commit message with the provided prefix. Useful for patch series and the like. -u Update affected files from CVS repository before attempting export. -k Reverse CVS keyword expansion (e.g. $Revision: 1.2.3.4$ becomes $Revision$) in working CVS checkout before applying patch. -w Specify the location of the CVS checkout to use for the export. This option does not require GIT_DIR to be set before execution if the current directory is within a Git repository. The default is the value of cvsexportcommit.cvsdir. -W Tell cvsexportcommit that the current working directory is not only a Git checkout, but also the CVS checkout. Therefore, Git will reset the working directory to the parent commit before proceeding. -v Verbose.
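An illustrative invocation combining several of the switches above (the checkout path and commit id are placeholders):
$ git cvsexportcommit -u -p -c -w ~/cvs/project 1a2b3c4   # update the CVS checkout first, apply pedantically, autocommit on success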
# git cvsexportcommit > Export a single `Git` commit to a CVS checkout. More information: > https://git-scm.com/docs/git-cvsexportcommit. * Merge a specific patch into CVS: `git cvsexportcommit -v -c -w {{path/to/project_cvs_checkout}} {{commit_sha1}}`
sdiff
Side-by-side merge of differences between FILE1 and FILE2. Mandatory arguments to long options are mandatory for short options too. -o, --output=FILE operate interactively, sending output to FILE -i, --ignore-case consider upper- and lower-case to be the same -E, --ignore-tab-expansion ignore changes due to tab expansion -Z, --ignore-trailing-space ignore white space at line end -b, --ignore-space-change ignore changes in the amount of white space -W, --ignore-all-space ignore all white space -B, --ignore-blank-lines ignore changes whose lines are all blank -I, --ignore-matching-lines=RE ignore changes all whose lines match RE --strip-trailing-cr strip trailing carriage return on input -a, --text treat all files as text -w, --width=NUM output at most NUM (default 130) print columns -l, --left-column output only the left column of common lines -s, --suppress-common-lines do not output common lines -t, --expand-tabs expand tabs to spaces in output --tabsize=NUM tab stops at every NUM (default 8) print columns -d, --minimal try hard to find a smaller set of changes -H, --speed-large-files assume large files, many scattered small changes --diff-program=PROGRAM use PROGRAM to compare files --help display this help and exit -v, --version output version information and exit If a FILE is '-', read standard input. Exit status is 0 if inputs are the same, 1 if different, 2 if trouble.
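Two sketches of typical invocations beyond a plain comparison (the file names are placeholders):
$ sdiff -s -w 100 old.conf new.conf        # show only differing lines, in 100-column output
$ sdiff -o merged.conf old.conf new.conf   # review differences interactively and write the merge to merged.conf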
# sdiff > Compare the differences between and optionally merge 2 files. More > information: https://manned.org/sdiff. * Compare 2 files: `sdiff {{path/to/file1}} {{path/to/file2}}` * Compare 2 files, ignoring all tabs and whitespace: `sdiff -W {{path/to/file1}} {{path/to/file2}}` * Compare 2 files, ignoring whitespace at the end of lines: `sdiff -Z {{path/to/file1}} {{path/to/file2}}` * Compare 2 files in a case-insensitive manner: `sdiff -i {{path/to/file1}} {{path/to/file2}}` * Compare and then merge, writing the output to a new file: `sdiff -o {{path/to/merged_file}} {{path/to/file1}} {{path/to/file2}}`
dir
List information about the FILEs (the current directory by default). Sort entries alphabetically if none of -cftuvSUX nor --sort is specified. Mandatory arguments to long options are mandatory for short options too. -a, --all do not ignore entries starting with . -A, --almost-all do not list implied . and .. --author with -l, print the author of each file -b, --escape print C-style escapes for nongraphic characters --block-size=SIZE with -l, scale sizes by SIZE when printing them; e.g., '--block-size=M'; see SIZE format below -B, --ignore-backups do not list implied entries ending with ~ -c with -lt: sort by, and show, ctime (time of last change of file status information); with -l: show ctime and sort by name; otherwise: sort by ctime, newest first -C list entries by columns --color[=WHEN] color the output WHEN; more info below -d, --directory list directories themselves, not their contents -D, --dired generate output designed for Emacs' dired mode -f list all entries in directory order -F, --classify[=WHEN] append indicator (one of */=>@|) to entries WHEN --file-type likewise, except do not append '*' --format=WORD across -x, commas -m, horizontal -x, long -l, single-column -1, verbose -l, vertical -C --full-time like -l --time-style=full-iso -g like -l, but do not list owner --group-directories-first group directories before files; can be augmented with a --sort option, but any use of --sort=none (-U) disables grouping -G, --no-group in a long listing, don't print group names -h, --human-readable with -l and -s, print sizes like 1K 234M 2G etc. --si likewise, but use powers of 1000 not 1024 -H, --dereference-command-line follow symbolic links listed on the command line --dereference-command-line-symlink-to-dir follow each command line symbolic link that points to a directory --hide=PATTERN do not list implied entries matching shell PATTERN (overridden by -a or -A) --hyperlink[=WHEN] hyperlink file names WHEN --indicator-style=WORD append indicator with style WORD to entry names: none (default), slash (-p), file-type (--file-type), classify (-F) -i, --inode print the index number of each file -I, --ignore=PATTERN do not list implied entries matching shell PATTERN -k, --kibibytes default to 1024-byte blocks for file system usage; used only with -s and per directory totals -l use a long listing format -L, --dereference when showing file information for a symbolic link, show information for the file the link references rather than for the link itself -m fill width with a comma separated list of entries -n, --numeric-uid-gid like -l, but list numeric user and group IDs -N, --literal print entry names without quoting -o like -l, but do not list group information -p, --indicator-style=slash append / indicator to directories -q, --hide-control-chars print ? 
instead of nongraphic characters --show-control-chars show nongraphic characters as-is (the default, unless program is 'ls' and output is a terminal) -Q, --quote-name enclose entry names in double quotes --quoting-style=WORD use quoting style WORD for entry names: literal, locale, shell, shell-always, shell-escape, shell-escape-always, c, escape (overrides QUOTING_STYLE environment variable) -r, --reverse reverse order while sorting -R, --recursive list subdirectories recursively -s, --size print the allocated size of each file, in blocks -S sort by file size, largest first --sort=WORD sort by WORD instead of name: none (-U), size (-S), time (-t), version (-v), extension (-X), width --time=WORD select which timestamp used to display or sort; access time (-u): atime, access, use; metadata change time (-c): ctime, status; modified time (default): mtime, modification; birth time: birth, creation; with -l, WORD determines which time to show; with --sort=time, sort by WORD (newest first) --time-style=TIME_STYLE time/date format with -l; see TIME_STYLE below -t sort by time, newest first; see --time -T, --tabsize=COLS assume tab stops at each COLS instead of 8 -u with -lt: sort by, and show, access time; with -l: show access time and sort by name; otherwise: sort by access time, newest first -U do not sort; list entries in directory order -v natural sort of (version) numbers within text -w, --width=COLS set output width to COLS. 0 means no limit -x list entries by lines instead of by columns -X sort alphabetically by entry extension -Z, --context print any security context of each file --zero end each output line with NUL, not newline -1 list one file per line --help display this help and exit --version output version information and exit The SIZE argument is an integer and optional unit (example: 10K is 10*1024). Units are K,M,G,T,P,E,Z,Y,R,Q (powers of 1024) or KB,MB,... (powers of 1000). Binary prefixes can be used, too: KiB=K, MiB=M, and so on. The TIME_STYLE argument can be full-iso, long-iso, iso, locale, or +FORMAT. FORMAT is interpreted like in date(1). If FORMAT is FORMAT1<newline>FORMAT2, then FORMAT1 applies to non-recent files and FORMAT2 to recent files. TIME_STYLE prefixed with 'posix-' takes effect only outside the POSIX locale. Also the TIME_STYLE environment variable sets the default style to use. The WHEN argument defaults to 'always' and can also be 'auto' or 'never'. Using color to distinguish file types is disabled both by default and with --color=never. With --color=auto, ls emits color codes only when standard output is connected to a terminal. The LS_COLORS environment variable can change the settings. Use the dircolors(1) command to set it. Exit status: 0 if OK, 1 if minor problems (e.g., cannot access subdirectory), 2 if serious trouble (e.g., cannot access command-line argument).
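A hedged example of combining the listing options above (the pattern is a placeholder):
$ dir -lh --sort=size --group-directories-first   # long, human-readable listing, largest files first, directories grouped
$ dir -l --hide='*.o'                             # long listing that skips object files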
# dir > List directory contents in columns; special characters are > represented by backslash escape sequences. Works as `ls -C --escape`. More > information: https://manned.org/dir. * List all files, including hidden files: `dir --all` * List files including their author (`-l` is required): `dir -l --author` * List files excluding those that match a specified glob pattern: `dir --hide={{pattern}}` * List subdirectories recursively: `dir --recursive` * Display help: `dir --help`
cd
The cd utility shall change the working directory of the current shell execution environment (see Section 2.12, Shell Execution Environment) by executing the following steps in sequence. (In the following steps, the symbol curpath represents an intermediate value used to simplify the description of the algorithm used by cd. There is no requirement that curpath be made visible to the application.) 1. If no directory operand is given and the HOME environment variable is empty or undefined, the default behavior is implementation-defined and no further steps shall be taken. 2. If no directory operand is given and the HOME environment variable is set to a non-empty value, the cd utility shall behave as if the directory named in the HOME environment variable was specified as the directory operand. 3. If the directory operand begins with a <slash> character, set curpath to the operand and proceed to step 7. 4. If the first component of the directory operand is dot or dot-dot, proceed to step 6. 5. Starting with the first pathname in the <colon>-separated pathnames of CDPATH (see the ENVIRONMENT VARIABLES section) if the pathname is non-null, test if the concatenation of that pathname, a <slash> character if that pathname did not end with a <slash> character, and the directory operand names a directory. If the pathname is null, test if the concatenation of dot, a <slash> character, and the operand names a directory. In either case, if the resulting string names an existing directory, set curpath to that string and proceed to step 7. Otherwise, repeat this step with the next pathname in CDPATH until all pathnames have been tested. 6. Set curpath to the directory operand. 7. If the -P option is in effect, proceed to step 10. If curpath does not begin with a <slash> character, set curpath to the string formed by the concatenation of the value of PWD, a <slash> character if the value of PWD did not end with a <slash> character, and curpath. 8. The curpath value shall then be converted to canonical form as follows, considering each component from beginning to end, in sequence: a. Dot components and any <slash> characters that separate them from the next component shall be deleted. b. For each dot-dot component, if there is a preceding component and it is neither root nor dot-dot, then: i. If the preceding component does not refer (in the context of pathname resolution with symbolic links followed) to a directory, then the cd utility shall display an appropriate error message and no further steps shall be taken. ii. The preceding component, all <slash> characters separating the preceding component from dot-dot, dot-dot, and all <slash> characters separating dot- dot from the following component (if any) shall be deleted. c. An implementation may further simplify curpath by removing any trailing <slash> characters that are not also leading <slash> characters, replacing multiple non- leading consecutive <slash> characters with a single <slash>, and replacing three or more leading <slash> characters with a single <slash>. If, as a result of this canonicalization, the curpath variable is null, no further steps shall be taken. 9. If curpath is longer than {PATH_MAX} bytes (including the terminating null) and the directory operand was not longer than {PATH_MAX} bytes (including the terminating null), then curpath shall be converted from an absolute pathname to an equivalent relative pathname if possible. 
This conversion shall always be considered possible if the value of PWD, with a trailing <slash> added if it does not already have one, is an initial substring of curpath. Whether or not it is considered possible under other circumstances is unspecified. Implementations may also apply this conversion if curpath is not longer than {PATH_MAX} bytes or the directory operand was longer than {PATH_MAX} bytes. 10. The cd utility shall then perform actions equivalent to the chdir() function called with curpath as the path argument. If these actions fail for any reason, the cd utility shall display an appropriate error message and the remainder of this step shall not be executed. If the -P option is not in effect, the PWD environment variable shall be set to the value that curpath had on entry to step 9 (i.e., before conversion to a relative pathname). If the -P option is in effect, the PWD environment variable shall be set to the string that would be output by pwd -P. If there is insufficient permission on the new directory, or on any parent of that directory, to determine the current working directory, the value of the PWD environment variable is unspecified. If, during the execution of the above steps, the PWD environment variable is set, the OLDPWD environment variable shall also be set to the value of the old working directory (that is the current working directory immediately prior to the call to cd). The cd utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines. The following options shall be supported by the implementation: -L Handle the operand dot-dot logically; symbolic link components shall not be resolved before dot-dot components are processed (see steps 8. and 9. in the DESCRIPTION). -P Handle the operand dot-dot physically; symbolic link components shall be resolved before dot-dot components are processed (see step 7. in the DESCRIPTION). If both -L and -P options are specified, the last of these options shall be used and all others ignored. If neither -L nor -P is specified, the operand shall be handled dot-dot logically; see the DESCRIPTION.
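The difference between logical (-L) and physical (-P) handling of dot-dot can be seen with a symbolic link; a minimal sketch, assuming a POSIX shell, a writable /tmp, and that these illustrative paths do not already exist:
$ mkdir -p /tmp/a/b && ln -s /tmp/a/b /tmp/l
$ cd -L /tmp/l && cd -L .. && pwd    # logical: dot-dot is removed textually from curpath, so this prints /tmp
$ cd -P /tmp/l && cd -P .. && pwd    # physical: the symlink is resolved first, so this prints /tmp/a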
# cd > Change the current working directory. More information: > https://manned.org/cd. * Go to the specified directory: `cd {{path/to/directory}}` * Go up to the parent of the current directory: `cd ..` * Go to the home directory of the current user: `cd` * Go to the home directory of the specified user: `cd ~{{username}}` * Go to the previously chosen directory: `cd -` * Go to the root directory: `cd /`
git-revert
Given one or more existing commits, revert the changes that the related patches introduce, and record some new commits that record them. This requires your working tree to be clean (no modifications from the HEAD commit). Note: git revert is used to record some new commits to reverse the effect of some earlier commits (often only a faulty one). If you want to throw away all uncommitted changes in your working directory, you should see git-reset(1), particularly the --hard option. If you want to extract specific files as they were in another commit, you should see git-restore(1), specifically the --source option. Take care with these alternatives as both will discard uncommitted changes in your working directory. See "Reset, restore and revert" in git(1) for the differences between the three commands. <commit>... Commits to revert. For a more complete list of ways to spell commit names, see gitrevisions(7). Sets of commits can also be given but no traversal is done by default, see git-rev-list(1) and its --no-walk option. -e, --edit With this option, git revert will let you edit the commit message prior to committing the revert. This is the default if you run the command from a terminal. -m parent-number, --mainline parent-number Usually you cannot revert a merge because you do not know which side of the merge should be considered the mainline. This option specifies the parent number (starting from 1) of the mainline and allows revert to reverse the change relative to the specified parent. Reverting a merge commit declares that you will never want the tree changes brought in by the merge. As a result, later merges will only bring in tree changes introduced by commits that are not ancestors of the previously reverted merge. This may or may not be what you want. See the revert-a-faulty-merge How-To[1] for more details. --no-edit With this option, git revert will not start the commit message editor. --cleanup=<mode> This option determines how the commit message will be cleaned up before being passed on to the commit machinery. See git-commit(1) for more details. In particular, if the <mode> is given a value of scissors, scissors will be appended to MERGE_MSG before being passed on in the case of a conflict. -n, --no-commit Usually the command automatically creates some commits with commit log messages stating which commits were reverted. This flag applies the changes necessary to revert the named commits to your working tree and the index, but does not make the commits. In addition, when this option is used, your index does not have to match the HEAD commit. The revert is done against the beginning state of your index. This is useful when reverting more than one commits' effect to your index in a row. -S[<keyid>], --gpg-sign[=<keyid>], --no-gpg-sign GPG-sign commits. The keyid argument is optional and defaults to the committer identity; if specified, it must be stuck to the option without a space. --no-gpg-sign is useful to countermand both commit.gpgSign configuration variable, and earlier --gpg-sign. -s, --signoff Add a Signed-off-by trailer at the end of the commit message. See the signoff option in git-commit(1) for more information. --strategy=<strategy> Use the given merge strategy. Should only be used once. See the MERGE STRATEGIES section in git-merge(1) for details. -X<option>, --strategy-option=<option> Pass the merge strategy-specific option through to the merge strategy. See git-merge(1) for details. 
--rerere-autoupdate, --no-rerere-autoupdate After the rerere mechanism reuses a recorded resolution on the current conflict to update the files in the working tree, allow it to also update the index with the result of resolution. --no-rerere-autoupdate is a good way to double-check what rerere did and catch potential mismerges, before committing the result to the index with a separate git add. --reference Instead of starting the body of the log message with "This reverts <full object name of the commit being reverted>.", refer to the commit using "--pretty=reference" format (cf. git-log(1)). The revert.reference configuration variable can be used to enable this option by default.
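Two sketches of the less-common forms described above (the commit names are placeholders):
$ git revert -m 1 1a2b3c4        # revert a merge commit, treating its first parent as the mainline
$ git revert -n HEAD~3..HEAD     # apply the reverts of the last three commits to the index and working tree without committing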
# git revert > Create new commits which reverse the effect of earlier ones. More > information: https://git-scm.com/docs/git-revert. * Revert the most recent commit: `git revert {{HEAD}}` * Revert the 5th last commit: `git revert HEAD~{{4}}` * Revert a specific commit: `git revert {{0c01a9}}` * Revert multiple commits: `git revert {{branch_name~5..branch_name~2}}` * Don't create new commits, just change the working tree: `git revert -n {{0c01a9..9a1743}}`
pathchk
Diagnose invalid or unportable file names. -p check for most POSIX systems -P check for empty names and leading "-" --portability check for all POSIX systems (equivalent to -p -P) --help display this help and exit --version output version information and exit
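For instance, checking a name that uses characters outside the portable filename character set (the name is a made-up example):
$ pathchk -p 'reports/2024 Q1:final'   # -p would typically flag the space and ':' as non-portable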
# pathchk > Check the validity and portability of one or more pathnames. More > information: https://www.gnu.org/software/coreutils/pathchk. * Check pathnames for validity in the current system: `pathchk {{path1 path2 …}}` * Check pathnames for validity on a wider range of POSIX compliant systems: `pathchk -p {{path1 path2 …}}` * Check pathnames for validity on all POSIX compliant systems: `pathchk --portability {{path1 path2 …}}` * Only check for empty pathnames or leading dashes (-): `pathchk -P {{path1 path2 …}}`
man
man is the system's manual pager. Each page argument given to man is normally the name of a program, utility or function. The manual page associated with each of these arguments is then found and displayed. A section, if provided, will direct man to look only in that section of the manual. The default action is to search in all of the available sections following a pre-defined order (see DEFAULTS), and to show only the first page found, even if page exists in several sections. The table below shows the section numbers of the manual followed by the types of pages they contain. 1 Executable programs or shell commands 2 System calls (functions provided by the kernel) 3 Library calls (functions within program libraries) 4 Special files (usually found in /dev) 5 File formats and conventions, e.g. /etc/passwd 6 Games 7 Miscellaneous (including macro packages and conventions), e.g. man(7), groff(7), man-pages(7) 8 System administration commands (usually only for root) 9 Kernel routines [Non standard] A manual page consists of several sections. Conventional section names include NAME, SYNOPSIS, CONFIGURATION, DESCRIPTION, OPTIONS, EXIT STATUS, RETURN VALUE, ERRORS, ENVIRONMENT, FILES, VERSIONS, CONFORMING TO, NOTES, BUGS, EXAMPLE, AUTHORS, and SEE ALSO. The following conventions apply to the SYNOPSIS section and can be used as a guide in other sections. bold text type exactly as shown. italic text replace with appropriate argument. [-abc] any or all arguments within [ ] are optional. -a|-b options delimited by | cannot be used together. argument ... argument is repeatable. [expression] ... entire expression within [ ] is repeatable. Exact rendering may vary depending on the output device. For instance, man will usually not be able to render italics when running in a terminal, and will typically use underlined or coloured text instead. The command or function illustration is a pattern that should match all possible invocations. In some cases it is advisable to illustrate several exclusive invocations as is shown in the SYNOPSIS section of this manual page. Non-argument options that are duplicated either on the command line, in $MANOPT, or both, are not harmful. For options that require an argument, each duplication will override the previous argument value. General options -C file, --config-file=file Use this user configuration file rather than the default of ~/.manpath. -d, --debug Print debugging information. -D, --default This option is normally issued as the very first option and resets man's behaviour to its default. Its use is to reset those options that may have been set in $MANOPT. Any options that follow -D will have their usual effect. --warnings[=warnings] Enable warnings from groff. This may be used to perform sanity checks on the source text of manual pages. warnings is a comma-separated list of warning names; if it is not supplied, the default is "mac". See the “Warnings” node in info groff for a list of available warning names. Main modes of operation -f, --whatis Equivalent to whatis. Display a short description from the manual page, if available. See whatis(1) for details. -k, --apropos Equivalent to apropos. Search the short manual page descriptions for keywords and display any matches. See apropos(1) for details. -K, --global-apropos Search for text in all manual pages. This is a brute- force search, and is likely to take some time; if you can, you should specify a section to reduce the number of pages that need to be searched. 
Search terms may be simple strings (the default), or regular expressions if the --regex option is used. Note that this searches the sources of the manual pages, not the rendered text, and so may include false positives due to things like comments in source files. Searching the rendered text would be much slower. -l, --local-file Activate "local" mode. Format and display local manual files instead of searching through the system's manual collection. Each manual page argument will be interpreted as an nroff source file in the correct format. No cat file is produced. If '-' is listed as one of the arguments, input will be taken from stdin. When this option is not used, and man fails to find the page required, before displaying the error message, it attempts to act as if this option was supplied, using the name as a filename and looking for an exact match. -w, --where, --path, --location Don't actually display the manual page, but do print the location of the source nroff file that would be formatted. If the -a option is also used, then print the locations of all source files that match the search criteria. -W, --where-cat, --location-cat Don't actually display the manual page, but do print the location of the preformatted cat file that would be displayed. If the -a option is also used, then print the locations of all preformatted cat files that match the search criteria. If -w and -W are both used, then print both source file and cat file separated by a space. If all of -w, -W, and -a are used, then do this for each possible match. -c, --catman This option is not for general use and should only be used by the catman program. -R encoding, --recode=encoding Instead of formatting the manual page in the usual way, output its source converted to the specified encoding. If you already know the encoding of the source file, you can also use manconv(1) directly. However, this option allows you to convert several manual pages to a single encoding without having to explicitly state the encoding of each, provided that they were already installed in a structure similar to a manual page hierarchy. Consider using man-recode(1) instead for converting multiple manual pages, since it has an interface designed for bulk conversion and so can be much faster. Finding manual pages -L locale, --locale=locale man will normally determine your current locale by a call to the C function setlocale(3) which interrogates various environment variables, possibly including $LC_MESSAGES and $LANG. To temporarily override the determined value, use this option to supply a locale string directly to man. Note that it will not take effect until the search for pages actually begins. Output such as the help message will always be displayed in the initially determined locale. -m system[,...], --systems=system[,...] If this system has access to other operating systems' manual pages, they can be accessed using this option. To search for a manual page from NewOS's manual page collection, use the option -m NewOS. The system specified can be a combination of comma delimited operating system names. To include a search of the native operating system's manual pages, include the system name man in the argument string. This option will override the $SYSTEM environment variable. -M path, --manpath=path Specify an alternate manpath to use. By default, man uses manpath derived code to determine the path to search. This option overrides the $MANPATH environment variable and causes option -m to be ignored. 
A path specified as a manpath must be the root of a manual page hierarchy structured into sections as described in the man-db manual (under "The manual page system"). To view manual pages outside such hierarchies, see the -l option. -S list, -s list, --sections=list The given list is a colon- or comma-separated list of sections, used to determine which manual sections to search and in what order. This option overrides the $MANSECT environment variable. (The -s spelling is for compatibility with System V.) -e sub-extension, --extension=sub-extension Some systems incorporate large packages of manual pages, such as those that accompany the Tcl package, into the main manual page hierarchy. To get around the problem of having two manual pages with the same name such as exit(3), the Tcl pages were usually all assigned to section l. As this is unfortunate, it is now possible to put the pages in the correct section, and to assign a specific "extension" to them, in this case, exit(3tcl). Under normal operation, man will display exit(3) in preference to exit(3tcl). To negotiate this situation and to avoid having to know which section the page you require resides in, it is now possible to give man a sub-extension string indicating which package the page must belong to. Using the above example, supplying the option -e tcl to man will restrict the search to pages having an extension of *tcl. -i, --ignore-case Ignore case when searching for manual pages. This is the default. -I, --match-case Search for manual pages case-sensitively. --regex Show all pages with any part of either their names or their descriptions matching each page argument as a regular expression, as with apropos(1). Since there is usually no reasonable way to pick a "best" page when searching for a regular expression, this option implies -a. --wildcard Show all pages with any part of either their names or their descriptions matching each page argument using shell-style wildcards, as with apropos(1) --wildcard. The page argument must match the entire name or description, or match on word boundaries in the description. Since there is usually no reasonable way to pick a "best" page when searching for a wildcard, this option implies -a. --names-only If the --regex or --wildcard option is used, match only page names, not page descriptions, as with whatis(1). Otherwise, no effect. -a, --all By default, man will exit after displaying the most suitable manual page it finds. Using this option forces man to display all the manual pages with names that match the search criteria. -u, --update This option causes man to update its database caches of installed manual pages. This is only needed in rare situations, and it is normally better to run mandb(8) instead. --no-subpages By default, man will try to interpret pairs of manual page names given on the command line as equivalent to a single manual page name containing a hyphen or an underscore. This supports the common pattern of programs that implement a number of subcommands, allowing them to provide manual pages for each that can be accessed using similar syntax as would be used to invoke the subcommands themselves. For example: $ man -aw git diff /usr/share/man/man1/git-diff.1.gz To disable this behaviour, use the --no-subpages option. $ man -aw --no-subpages git diff /usr/share/man/man1/git.1.gz /usr/share/man/man3/Git.3pm.gz /usr/share/man/man1/diff.1.gz Controlling formatted output -P pager, --pager=pager Specify which output pager to use. 
By default, man uses less, falling back to cat if less is not found or is not executable. This option overrides the $MANPAGER environment variable, which in turn overrides the $PAGER environment variable. It is not used in conjunction with -f or -k. The value may be a simple command name or a command with arguments, and may use shell quoting (backslashes, single quotes, or double quotes). It may not use pipes to connect multiple commands; if you need that, use a wrapper script, which may take the file to display either as an argument or on standard input. -r prompt, --prompt=prompt If a recent version of less is used as the pager, man will attempt to set its prompt and some sensible options. The default prompt looks like Manual page name(sec) line x where name denotes the manual page name, sec denotes the section it was found under and x the current line number. This is achieved by using the $LESS environment variable. Supplying -r with a string will override this default. The string may contain the text $MAN_PN which will be expanded to the name of the current manual page and its section name surrounded by "(" and ")". The string used to produce the default could be expressed as \ Manual\ page\ \$MAN_PN\ ?ltline\ %lt?L/%L.: byte\ %bB?s/%s..?\ (END):?pB\ %pB\\%.. (press h for help or q to quit) It is broken into three lines here for the sake of readability only. For its meaning see the less(1) manual page. The prompt string is first evaluated by the shell. All double quotes, back-quotes and backslashes in the prompt must be escaped by a preceding backslash. The prompt string may end in an escaped $ which may be followed by further options for less. By default man sets the -ix8 options. The $MANLESS environment variable described below may be used to set a default prompt string if none is supplied on the command line. -7, --ascii When viewing a pure ascii(7) manual page on a 7 bit terminal or terminal emulator, some characters may not display correctly when using the latin1(7) device description with GNU nroff. This option allows pure ascii manual pages to be displayed in ascii with the latin1 device. It will not translate any latin1 text. The following table shows the translations performed: some parts of it may only be displayed properly when using GNU nroff's latin1(7) device.
Description            Octal   latin1   ascii
continuation hyphen    255     ‐        -
bullet (middle dot)    267     •        o
acute accent           264     ´        '
multiplication sign    327     ×        x
If the latin1 column displays correctly, your terminal may be set up for latin1 characters and this option is not necessary. If the latin1 and ascii columns are identical, you are reading this page using this option or man did not format this page using the latin1 device description. If the latin1 column is missing or corrupt, you may need to view manual pages with this option. This option is ignored when using options -t, -H, -T, or -Z and may be useless for nroff other than GNU's. -E encoding, --encoding=encoding Generate output for a character encoding other than the default. For backward compatibility, encoding may be an nroff device such as ascii, latin1, or utf8 as well as a true character encoding such as UTF-8. --no-hyphenation, --nh Normally, nroff will automatically hyphenate text at line breaks even in words that do not contain hyphens, if it is necessary to do so to lay out words on a line without excessive spacing. This option disables automatic hyphenation, so words will only be hyphenated if they already contain hyphens.
If you are writing a manual page and simply want to prevent nroff from hyphenating a word at an inappropriate point, do not use this option, but consult the nroff documentation instead; for instance, you can put "\%" inside a word to indicate that it may be hyphenated at that point, or put "\%" at the start of a word to prevent it from being hyphenated. --no-justification, --nj Normally, nroff will automatically justify text to both margins. This option disables full justification, leaving justified only to the left margin, sometimes called "ragged-right" text. If you are writing a manual page and simply want to prevent nroff from justifying certain paragraphs, do not use this option, but consult the nroff documentation instead; for instance, you can use the ".na", ".nf", ".fi", and ".ad" requests to temporarily disable adjusting and filling. -p string, --preprocessor=string Specify the sequence of preprocessors to run before nroff or troff/groff. Not all installations will have a full set of preprocessors. Some of the preprocessors and the letters used to designate them are: eqn (e), grap (g), pic (p), tbl (t), vgrind (v), refer (r). This option overrides the $MANROFFSEQ environment variable. zsoelim is always run as the very first preprocessor. -t, --troff Use groff -mandoc to format the manual page to stdout. This option is not required in conjunction with -H, -T, or -Z. -T[device], --troff-device[=device] This option is used to change groff (or possibly troff's) output to be suitable for a device other than the default. It implies -t. Examples (provided with Groff-1.17) include dvi, latin1, ps, utf8, X75 and X100. -H[browser], --html[=browser] This option will cause groff to produce HTML output, and will display that output in a web browser. The choice of browser is determined by the optional browser argument if one is provided, by the $BROWSER environment variable, or by a compile-time default if that is unset (usually lynx). This option implies -t, and will only work with GNU troff. -X[dpi], --gxditview[=dpi] This option displays the output of groff in a graphical window using the gxditview program. The dpi (dots per inch) may be 75, 75-12, 100, or 100-12, defaulting to 75; the -12 variants use a 12-point base font. This option implies -T with the X75, X75-12, X100, or X100-12 device respectively. -Z, --ditroff groff will run troff and then use an appropriate post- processor to produce output suitable for the chosen device. If groff -mandoc is groff, this option is passed to groff and will suppress the use of a post-processor. It implies -t. Getting help -?, --help Print a help message and exit. --usage Print a short usage message and exit. -V, --version Display version information.
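A few combined invocations grounded in the options above (the page names are illustrative):
$ man 5 passwd        # read the file-format page rather than a command page
$ man -a intro        # show the intro page from every section, one after another
$ man -w -a printf    # print the source file locations of every matching page instead of displaying them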
# man > Format and display manual pages. More information: > https://www.man7.org/linux/man-pages/man1/man.1.html. * Display the man page for a command: `man {{command}}` * Display the man page for a command from section 7: `man {{7}} {{command}}` * List all available sections for a command: `man -f {{command}}` * Display the path searched for manpages: `man --path` * Display the location of a manpage rather than the manpage itself: `man -w {{command}}` * Display the man page using a specific locale: `man {{command}} --locale={{locale}}` * Search for manpages containing a search string: `man -k "{{search_string}}"`
ps
The ps utility shall write information about processes, subject to having appropriate privileges to obtain information about those processes. By default, ps shall select all processes with the same effective user ID as the current user and the same controlling terminal as the invoker. The ps utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines. The following options shall be supported: -a Write information for all processes associated with terminals. Implementations may omit session leaders from this list. -A Write information for all processes. -d Write information for all processes, except session leaders. -e Write information for all processes. (Equivalent to -A.) -f Generate a full listing. (See the STDOUT section for the contents of a full listing.) -g grouplist Write information for processes whose session leaders are given in grouplist. The application shall ensure that the grouplist is a single argument in the form of a <blank> or <comma>-separated list. -G grouplist Write information for processes whose real group ID numbers are given in grouplist. The application shall ensure that the grouplist is a single argument in the form of a <blank> or <comma>-separated list. -l Generate a long listing. (See STDOUT for the contents of a long listing.) -n namelist Specify the name of an alternative system namelist file in place of the default. The name of the default file and the format of a namelist file are unspecified. -o format Write information according to the format specification given in format. This is fully described in the STDOUT section. Multiple -o options can be specified; the format specification shall be interpreted as the <space>-separated concatenation of all the format option-arguments. -p proclist Write information for processes whose process ID numbers are given in proclist. The application shall ensure that the proclist is a single argument in the form of a <blank> or <comma>-separated list. -t termlist Write information for processes associated with terminals given in termlist. The application shall ensure that the termlist is a single argument in the form of a <blank> or <comma>-separated list. Terminal identifiers shall be given in an implementation-defined format. On XSI-conformant systems, they shall be given in one of two forms: the device's filename (for example, tty04) or, if the device's filename starts with tty, just the identifier following the characters tty (for example, "04"). -u userlist Write information for processes whose user ID numbers or login names are given in userlist. The application shall ensure that the userlist is a single argument in the form of a <blank> or <comma>-separated list. In the listing, the numerical user ID shall be written unless the -f option is used, in which case the login name shall be written. -U userlist Write information for processes whose real user ID numbers or login names are given in userlist. The application shall ensure that the userlist is a single argument in the form of a <blank> or <comma>-separated list. With the exception of -f, -l, -n namelist, and -o format, all of the options shown are used to select processes. If any are specified, the default list shall be ignored and ps shall select the processes represented by the inclusive OR of all the selection-criteria options.
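Two portable invocations following the POSIX options above (the user name is a placeholder, and a POSIX-conformant ps is assumed):
$ ps -ef                              # full listing of every process
$ ps -u alice -o pid,ppid,tty,comm    # selected columns for processes owned by user alice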
# ps > Information about running processes. More information: > https://www.unix.com/man-page/osx/1/ps/. * List all running processes: `ps aux` * List all running processes including the full command string: `ps auxww` * Search for a process that matches a string: `ps aux | grep {{string}}` * Get the parent PID of a process: `ps -o ppid= -p {{pid}}` * Sort processes by memory usage: `ps -m` * Sort processes by CPU usage: `ps -r`
git-ls-tree
Lists the contents of a given tree object, like what "/bin/ls -a" does in the current working directory. Note that: • the behaviour is slightly different from that of "/bin/ls" in that the <path> denotes just a list of patterns to match, e.g. so specifying directory name (without -r) will behave differently, and order of the arguments does not matter. • the behaviour is similar to that of "/bin/ls" in that the <path> is taken as relative to the current working directory. E.g. when you are in a directory sub that has a directory dir, you can run git ls-tree -r HEAD dir to list the contents of the tree (that is sub/dir in HEAD). You don’t want to give a tree that is not at the root level (e.g. git ls-tree -r HEAD:sub dir) in this case, as that would result in asking for sub/sub/dir in the HEAD commit. However, the current working directory can be ignored by passing --full-tree option. <tree-ish> Id of a tree-ish. -d Show only the named tree entry itself, not its children. -r Recurse into sub-trees. -t Show tree entries even when going to recurse them. Has no effect if -r was not passed. -d implies -t. -l, --long Show object size of blob (file) entries. -z \0 line termination on output and do not quote filenames. See OUTPUT FORMAT below for more information. --name-only, --name-status List only filenames (instead of the "long" output), one per line. Cannot be combined with --object-only. --object-only List only names of the objects, one per line. Cannot be combined with --name-only or --name-status. This is equivalent to specifying --format='%(objectname)', but for both this option and that exact format the command takes a hand-optimized codepath instead of going through the generic formatting mechanism. --abbrev[=<n>] Instead of showing the full 40-byte hexadecimal object lines, show the shortest prefix that is at least <n> hexdigits long that uniquely refers the object. Non default number of digits can be specified with --abbrev=<n>. --full-name Instead of showing the path names relative to the current working directory, show the full path names. --full-tree Do not limit the listing to the current working directory. Implies --full-name. --format=<format> A string that interpolates %(fieldname) from the result being shown. It also interpolates %% to %, and %xNN where NN are hex digits interpolates to character with hex code NN; for example %x00 interpolates to \0 (NUL), %x09 to \t (TAB) and %x0a to \n (LF). When specified, --format cannot be combined with other format-altering options, including --long, --name-only and --object-only. [<path>...] When paths are given, show them (note that this isn’t really raw pathnames, but rather a list of patterns to match). Otherwise implicitly uses the root level of the tree as the sole path argument.
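Two sketches combining the listing options above (the path is a placeholder):
$ git ls-tree -r --abbrev=12 HEAD sub/dir          # recursive listing with 12-digit object names
$ git ls-tree -r --format='%(objectname)' HEAD     # equivalent to --object-only, as noted above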
# git ls-tree > List the contents of a tree object. More information: https://git- > scm.com/docs/git-ls-tree. * List the contents of the tree on a branch: `git ls-tree {{branch_name}}` * List the contents of the tree on a commit, recursing into subtrees: `git ls-tree -r {{commit_hash}}` * List only the filenames of the tree on a commit: `git ls-tree --name-only {{commit_hash}}`
ssh
ssh (SSH client) is a program for logging into a remote machine and for executing commands on a remote machine. It is intended to provide secure encrypted communications between two untrusted hosts over an insecure network. X11 connections, arbitrary TCP ports and UNIX-domain sockets can also be forwarded over the secure channel. ssh connects and logs into the specified destination, which may be specified as either [user@]hostname or a URI of the form ssh://[user@]hostname[:port]. The user must prove their identity to the remote machine using one of several methods (see below). If a command is specified, it will be executed on the remote host instead of a login shell. A complete command line may be specified as command, or it may have additional arguments. If supplied, the arguments will be appended to the command, separated by spaces, before it is sent to the server to be executed. The options are as follows: -4 Forces ssh to use IPv4 addresses only. -6 Forces ssh to use IPv6 addresses only. -A Enables forwarding of connections from an authentication agent such as ssh-agent(1). This can also be specified on a per-host basis in a configuration file. Agent forwarding should be enabled with caution. Users with the ability to bypass file permissions on the remote host (for the agent's UNIX-domain socket) can access the local agent through the forwarded connection. An attacker cannot obtain key material from the agent, however they can perform operations on the keys that enable them to authenticate using the identities loaded into the agent. A safer alternative may be to use a jump host (see -J). -a Disables forwarding of the authentication agent connection. -B bind_interface Bind to the address of bind_interface before attempting to connect to the destination host. This is only useful on systems with more than one address. -b bind_address Use bind_address on the local machine as the source address of the connection. Only useful on systems with more than one address. -C Requests compression of all data (including stdin, stdout, stderr, and data for forwarded X11, TCP and UNIX-domain connections). The compression algorithm is the same used by gzip(1). Compression is desirable on modem lines and other slow connections, but will only slow down things on fast networks. The default value can be set on a host-by- host basis in the configuration files; see the Compression option in ssh_config(5). -c cipher_spec Selects the cipher specification for encrypting the session. cipher_spec is a comma-separated list of ciphers listed in order of preference. See the Ciphers keyword in ssh_config(5) for more information. -D [bind_address:]port Specifies a local “dynamic” application-level port forwarding. This works by allocating a socket to listen to port on the local side, optionally bound to the specified bind_address. Whenever a connection is made to this port, the connection is forwarded over the secure channel, and the application protocol is then used to determine where to connect to from the remote machine. Currently the SOCKS4 and SOCKS5 protocols are supported, and ssh will act as a SOCKS server. Only root can forward privileged ports. Dynamic port forwardings can also be specified in the configuration file. IPv6 addresses can be specified by enclosing the address in square brackets. Only the superuser can forward privileged ports. By default, the local port is bound in accordance with the GatewayPorts setting. 
However, an explicit bind_address may be used to bind the connection to a specific address. The bind_address of “localhost” indicates that the listening port be bound for local use only, while an empty address or ‘*’ indicates that the port should be available from all interfaces. -E log_file Append debug logs to log_file instead of standard error. -e escape_char Sets the escape character for sessions with a pty (default: ‘~’). The escape character is only recognized at the beginning of a line. The escape character followed by a dot (‘.’) closes the connection; followed by control-Z suspends the connection; and followed by itself sends the escape character once. Setting the character to “none” disables any escapes and makes the session fully transparent. -F configfile Specifies an alternative per-user configuration file. If a configuration file is given on the command line, the system-wide configuration file (/etc/ssh/ssh_config) will be ignored. The default for the per-user configuration file is ~/.ssh/config. If set to “none”, no configuration files will be read. -f Requests ssh to go to background just before command execution. This is useful if ssh is going to ask for passwords or passphrases, but the user wants it in the background. This implies -n. The recommended way to start X11 programs at a remote site is with something like ssh -f host xterm. If the ExitOnForwardFailure configuration option is set to “yes”, then a client started with -f will wait for all remote port forwards to be successfully established before placing itself in the background. Refer to the description of ForkAfterAuthentication in ssh_config(5) for details. -G Causes ssh to print its configuration after evaluating Host and Match blocks and exit. -g Allows remote hosts to connect to local forwarded ports. If used on a multiplexed connection, then this option must be specified on the master process. -I pkcs11 Specify the PKCS#11 shared library ssh should use to communicate with a PKCS#11 token providing keys for user authentication. -i identity_file Selects a file from which the identity (private key) for public key authentication is read. You can also specify a public key file to use the corresponding private key that is loaded in ssh-agent(1) when the private key file is not present locally. The default is ~/.ssh/id_rsa, ~/.ssh/id_ecdsa, ~/.ssh/id_ecdsa_sk, ~/.ssh/id_ed25519, ~/.ssh/id_ed25519_sk and ~/.ssh/id_dsa. Identity files may also be specified on a per-host basis in the configuration file. It is possible to have multiple -i options (and multiple identities specified in configuration files). If no certificates have been explicitly specified by the CertificateFile directive, ssh will also try to load certificate information from the filename obtained by appending -cert.pub to identity filenames. -J destination Connect to the target host by first making a ssh connection to the jump host described by destination and then establishing a TCP forwarding to the ultimate destination from there. Multiple jump hops may be specified separated by comma characters. This is a shortcut to specify a ProxyJump configuration directive. Note that configuration directives supplied on the command-line generally apply to the destination host and not any specified jump hosts. Use ~/.ssh/config to specify configuration for jump hosts. -K Enables GSSAPI-based authentication and forwarding (delegation) of GSSAPI credentials to the server. -k Disables forwarding (delegation) of GSSAPI credentials to the server. 
-L [bind_address:]port:host:hostport -L [bind_address:]port:remote_socket -L local_socket:host:hostport -L local_socket:remote_socket Specifies that connections to the given TCP port or Unix socket on the local (client) host are to be forwarded to the given host and port, or Unix socket, on the remote side. This works by allocating a socket to listen to either a TCP port on the local side, optionally bound to the specified bind_address, or to a Unix socket. Whenever a connection is made to the local port or socket, the connection is forwarded over the secure channel, and a connection is made to either host port hostport, or the Unix socket remote_socket, from the remote machine. Port forwardings can also be specified in the configuration file. Only the superuser can forward privileged ports. IPv6 addresses can be specified by enclosing the address in square brackets. By default, the local port is bound in accordance with the GatewayPorts setting. However, an explicit bind_address may be used to bind the connection to a specific address. The bind_address of “localhost” indicates that the listening port be bound for local use only, while an empty address or ‘*’ indicates that the port should be available from all interfaces. -l login_name Specifies the user to log in as on the remote machine. This also may be specified on a per-host basis in the configuration file. -M Places the ssh client into “master” mode for connection sharing. Multiple -M options places ssh into “master” mode but with confirmation required using ssh-askpass(1) before each operation that changes the multiplexing state (e.g. opening a new session). Refer to the description of ControlMaster in ssh_config(5) for details. -m mac_spec A comma-separated list of MAC (message authentication code) algorithms, specified in order of preference. See the MACs keyword in ssh_config(5) for more information. -N Do not execute a remote command. This is useful for just forwarding ports. Refer to the description of SessionType in ssh_config(5) for details. -n Redirects stdin from /dev/null (actually, prevents reading from stdin). This must be used when ssh is run in the background. A common trick is to use this to run X11 programs on a remote machine. For example, ssh -n shadows.cs.hut.fi emacs & will start an emacs on shadows.cs.hut.fi, and the X11 connection will be automatically forwarded over an encrypted channel. The ssh program will be put in the background. (This does not work if ssh needs to ask for a password or passphrase; see also the -f option.) Refer to the description of StdinNull in ssh_config(5) for details. -O ctl_cmd Control an active connection multiplexing master process. When the -O option is specified, the ctl_cmd argument is interpreted and passed to the master process. Valid commands are: “check” (check that the master process is running), “forward” (request forwardings without command execution), “cancel” (cancel forwardings), “exit” (request the master to exit), and “stop” (request the master to stop accepting further multiplexing requests). -o option Can be used to give options in the format used in the configuration file. This is useful for specifying options for which there is no separate command-line flag. For full details of the options listed below, and their possible values, see ssh_config(5). 
AddKeysToAgent AddressFamily BatchMode BindAddress CanonicalDomains CanonicalizeFallbackLocal CanonicalizeHostname CanonicalizeMaxDots CanonicalizePermittedCNAMEs CASignatureAlgorithms CertificateFile CheckHostIP Ciphers ClearAllForwardings Compression ConnectionAttempts ConnectTimeout ControlMaster ControlPath ControlPersist DynamicForward EnableEscapeCommandline EscapeChar ExitOnForwardFailure FingerprintHash ForkAfterAuthentication ForwardAgent ForwardX11 ForwardX11Timeout ForwardX11Trusted GatewayPorts GlobalKnownHostsFile GSSAPIAuthentication GSSAPIDelegateCredentials HashKnownHosts Host HostbasedAcceptedAlgorithms HostbasedAuthentication HostKeyAlgorithms HostKeyAlias Hostname IdentitiesOnly IdentityAgent IdentityFile IPQoS KbdInteractiveAuthentication KbdInteractiveDevices KexAlgorithms KnownHostsCommand LocalCommand LocalForward LogLevel MACs Match NoHostAuthenticationForLocalhost NumberOfPasswordPrompts PasswordAuthentication PermitLocalCommand PermitRemoteOpen PKCS11Provider Port PreferredAuthentications ProxyCommand ProxyJump ProxyUseFdpass PubkeyAcceptedAlgorithms PubkeyAuthentication RekeyLimit RemoteCommand RemoteForward RequestTTY RequiredRSASize SendEnv ServerAliveInterval ServerAliveCountMax SessionType SetEnv StdinNull StreamLocalBindMask StreamLocalBindUnlink StrictHostKeyChecking TCPKeepAlive Tunnel TunnelDevice UpdateHostKeys User UserKnownHostsFile VerifyHostKeyDNS VisualHostKey XAuthLocation -p port Port to connect to on the remote host. This can be specified on a per-host basis in the configuration file. -Q query_option Queries for the algorithms supported by one of the following features: cipher (supported symmetric ciphers), cipher-auth (supported symmetric ciphers that support authenticated encryption), help (supported query terms for use with the -Q flag), mac (supported message integrity codes), kex (key exchange algorithms), key (key types), key-cert (certificate key types), key-plain (non- certificate key types), key-sig (all key types and signature algorithms), protocol-version (supported SSH protocol versions), and sig (supported signature algorithms). Alternatively, any keyword from ssh_config(5) or sshd_config(5) that takes an algorithm list may be used as an alias for the corresponding query_option. -q Quiet mode. Causes most warning and diagnostic messages to be suppressed. -R [bind_address:]port:host:hostport -R [bind_address:]port:local_socket -R remote_socket:host:hostport -R remote_socket:local_socket -R [bind_address:]port Specifies that connections to the given TCP port or Unix socket on the remote (server) host are to be forwarded to the local side. This works by allocating a socket to listen to either a TCP port or to a Unix socket on the remote side. Whenever a connection is made to this port or Unix socket, the connection is forwarded over the secure channel, and a connection is made from the local machine to either an explicit destination specified by host port hostport, or local_socket, or, if no explicit destination was specified, ssh will act as a SOCKS 4/5 proxy and forward connections to the destinations requested by the remote SOCKS client. Port forwardings can also be specified in the configuration file. Privileged ports can be forwarded only when logging in as root on the remote machine. IPv6 addresses can be specified by enclosing the address in square brackets. By default, TCP listening sockets on the server will be bound to the loopback interface only. This may be overridden by specifying a bind_address. 
An empty bind_address, or the address ‘*’, indicates that the remote socket should listen on all interfaces. Specifying a remote bind_address will only succeed if the server's GatewayPorts option is enabled (see sshd_config(5)). If the port argument is ‘0’, the listen port will be dynamically allocated on the server and reported to the client at run time. When used together with -O forward, the allocated port will be printed to the standard output. -S ctl_path Specifies the location of a control socket for connection sharing, or the string “none” to disable connection sharing. Refer to the description of ControlPath and ControlMaster in ssh_config(5) for details. -s May be used to request invocation of a subsystem on the remote system. Subsystems facilitate the use of SSH as a secure transport for other applications (e.g. sftp(1)). The subsystem is specified as the remote command. Refer to the description of SessionType in ssh_config(5) for details. -T Disable pseudo-terminal allocation. -t Force pseudo-terminal allocation. This can be used to execute arbitrary screen-based programs on a remote machine, which can be very useful, e.g. when implementing menu services. Multiple -t options force tty allocation, even if ssh has no local tty. -V Display the version number and exit. -v Verbose mode. Causes ssh to print debugging messages about its progress. This is helpful in debugging connection, authentication, and configuration problems. Multiple -v options increase the verbosity. The maximum is 3. -W host:port Requests that standard input and output on the client be forwarded to host on port over the secure channel. Implies -N, -T, ExitOnForwardFailure and ClearAllForwardings, though these can be overridden in the configuration file or using -o command line options. -w local_tun[:remote_tun] Requests tunnel device forwarding with the specified tun(4) devices between the client (local_tun) and the server (remote_tun). The devices may be specified by numerical ID or the keyword “any”, which uses the next available tunnel device. If remote_tun is not specified, it defaults to “any”. See also the Tunnel and TunnelDevice directives in ssh_config(5). If the Tunnel directive is unset, it will be set to the default tunnel mode, which is “point-to-point”. If a different Tunnel forwarding mode it desired, then it should be specified before -w. -X Enables X11 forwarding. This can also be specified on a per-host basis in a configuration file. X11 forwarding should be enabled with caution. Users with the ability to bypass file permissions on the remote host (for the user's X authorization database) can access the local X11 display through the forwarded connection. An attacker may then be able to perform activities such as keystroke monitoring. For this reason, X11 forwarding is subjected to X11 SECURITY extension restrictions by default. Refer to the ssh -Y option and the ForwardX11Trusted directive in ssh_config(5) for more information. -x Disables X11 forwarding. -Y Enables trusted X11 forwarding. Trusted X11 forwardings are not subjected to the X11 SECURITY extension controls. -y Send log information using the syslog(3) system module. By default this information is sent to stderr. ssh may additionally obtain configuration data from a per-user configuration file and a system-wide configuration file. The file format and configuration options are described in ssh_config(5).
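A few hedged illustrations of the options above; all host names, ports and key paths are placeholders rather than values taken from this page. `ssh -J user@jump1.example,user@jump2.example -i ~/.ssh/id_ed25519 user@target.example` hops through two jump hosts and, per the note on -J, applies the command-line options such as -i to the final destination rather than the jump hosts. `ssh -N -L 8080:intranet.example:80 user@gateway.example` listens on local port 8080 and relays connections through the gateway to port 80 on intranet.example without running a remote command. `ssh -N -R 9090:localhost:3000 user@server.example` does the reverse, exposing a local service on the server's loopback port 9090 (on all interfaces only if the server enables GatewayPorts). `ssh -Q cipher` or `ssh -Q kex` lists the algorithms the client supports, which can be checked before pinning them with -o Ciphers= or -o KexAlgorithms=.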
# ssh > Secure Shell is a protocol used to securely log onto remote systems. It can > be used for logging or executing commands on a remote server. More > information: https://man.openbsd.org/ssh. * Connect to a remote server: `ssh {{username}}@{{remote_host}}` * Connect to a remote server with a specific identity (private key): `ssh -i {{path/to/key_file}} {{username}}@{{remote_host}}` * Connect to a remote server using a specific port: `ssh {{username}}@{{remote_host}} -p {{2222}}` * Run a command on a remote server with a [t]ty allocation allowing interaction with the remote command: `ssh {{username}}@{{remote_host}} -t {{command}} {{command_arguments}}` * SSH tunneling: Dynamic port forwarding (SOCKS proxy on `localhost:1080`): `ssh -D {{1080}} {{username}}@{{remote_host}}` * SSH tunneling: Forward a specific port (`localhost:9999` to `example.org:80`) along with disabling pseudo-[T]ty allocation and executio[N] of remote commands: `ssh -L {{9999}}:{{example.org}}:{{80}} -N -T {{username}}@{{remote_host}}` * SSH jumping: Connect through a jumphost to a remote server (Multiple jump hops may be specified separated by comma characters): `ssh -J {{username}}@{{jump_host}} {{username}}@{{remote_host}}` * Agent forwarding: Forward the authentication information to the remote machine (see `man ssh_config` for available options): `ssh -A {{username}}@{{remote_host}}`
set
If no options or arguments are specified, set shall write the names and values of all shell variables in the collation sequence of the current locale. Each name shall start on a separate line, using the format: "%s=%s\n", <name>, <value> The value string shall be written with appropriate quoting; see the description of shell quoting in Section 2.2, Quoting. The output shall be suitable for reinput to the shell, setting or resetting, as far as possible, the variables that are currently set; read-only variables cannot be reset. When options are specified, they shall set or unset attributes of the shell, as described below. When arguments are specified, they cause positional parameters to be set or unset, as described below. Setting or unsetting attributes and positional parameters are not necessarily related actions, but they can be combined in a single invocation of set. The set special built-in shall support the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines except that options can be specified with either a leading <hyphen-minus> (meaning enable the option) or <plus-sign> (meaning disable it) unless otherwise specified. Implementations shall support the options in the following list in both their <hyphen-minus> and <plus-sign> forms. These options can also be specified as options to sh. -a When this option is on, the export attribute shall be set for each variable to which an assignment is performed; see the Base Definitions volume of POSIX.1‐2017, Section 4.23, Variable Assignment. If the assignment precedes a utility name in a command, the export attribute shall not persist in the current execution environment after the utility completes, with the exception that preceding one of the special built-in utilities causes the export attribute to persist after the built-in has completed. If the assignment does not precede a utility name in the command, or if the assignment is a result of the operation of the getopts or read utilities, the export attribute shall persist until the variable is unset. -b This option shall be supported if the implementation supports the User Portability Utilities option. It shall cause the shell to notify the user asynchronously of background job completions. The following message is written to standard error: "[%d]%c %s%s\n", <job-number>, <current>, <status>, <job-name> where the fields shall be as follows: <current> The character '+' identifies the job that would be used as a default for the fg or bg utilities; this job can also be specified using the job_id "%+" or "%%". The character '-' identifies the job that would become the default if the current default job were to exit; this job can also be specified using the job_id "%-". For other jobs, this field is a <space>. At most one job can be identified with '+' and at most one job can be identified with '-'. If there is any suspended job, then the current job shall be a suspended job. If there are at least two suspended jobs, then the previous job also shall be a suspended job. <job-number> A number that can be used to identify the process group to the wait, fg, bg, and kill utilities. Using these utilities, the job can be identified by prefixing the job number with '%'. <status> Unspecified. <job-name> Unspecified. When the shell notifies the user a job has been completed, it may remove the job's process ID from the list of those known in the current shell execution environment; see Section 2.9.3.1, Examples. Asynchronous notification shall not be enabled by default. 
-C (Uppercase C.) Prevent existing files from being overwritten by the shell's '>' redirection operator (see Section 2.7.2, Redirecting Output); the ">|" redirection operator shall override this noclobber option for an individual file. -e When this option is on, when any command fails (for any of the reasons listed in Section 2.8.1, Consequences of Shell Errors or by returning an exit status greater than zero), the shell immediately shall exit, as if by executing the exit special built-in utility with no arguments, with the following exceptions: 1. The failure of any individual command in a multi- command pipeline shall not cause the shell to exit. Only the failure of the pipeline itself shall be considered. 2. The -e setting shall be ignored when executing the compound list following the while, until, if, or elif reserved word, a pipeline beginning with the ! reserved word, or any command of an AND-OR list other than the last. 3. If the exit status of a compound command other than a subshell command was the result of a failure while -e was being ignored, then -e shall not apply to this command. This requirement applies to the shell environment and each subshell environment separately. For example, in: set -e; (false; echo one) | cat; echo two the false command causes the subshell to exit without executing echo one; however, echo two is executed because the exit status of the pipeline (false; echo one) | cat is zero. -f The shell shall disable pathname expansion. -h Locate and remember utilities invoked by functions as those functions are defined (the utilities are normally located when the function is executed). -m This option shall be supported if the implementation supports the User Portability Utilities option. All jobs shall be run in their own process groups. Immediately before the shell issues a prompt after completion of the background job, a message reporting the exit status of the background job shall be written to standard error. If a foreground job stops, the shell shall write a message to standard error to that effect, formatted as described by the jobs utility. In addition, if a job changes status other than exiting (for example, if it stops for input or output or is stopped by a SIGSTOP signal), the shell shall write a similar message immediately prior to writing the next prompt. This option is enabled by default for interactive shells. -n The shell shall read commands but does not execute them; this can be used to check for shell script syntax errors. An interactive shell may ignore this option. -o Write the current settings of the options to standard output in an unspecified format. +o Write the current option settings to standard output in a format that is suitable for reinput to the shell as commands that achieve the same options settings. -o option This option is supported if the system supports the User Portability Utilities option. It shall set various options, many of which shall be equivalent to the single option letters. The following values of option shall be supported: allexport Equivalent to -a. errexit Equivalent to -e. ignoreeof Prevent an interactive shell from exiting on end- of-file. This setting prevents accidental logouts when <control>‐D is entered. A user shall explicitly exit to leave the interactive shell. monitor Equivalent to -m. This option is supported if the system supports the User Portability Utilities option. noclobber Equivalent to -C (uppercase C). noglob Equivalent to -f. noexec Equivalent to -n. 
nolog Prevent the entry of function definitions into the command history; see Command History List. notify Equivalent to -b. nounset Equivalent to -u. verbose Equivalent to -v. vi Allow shell command line editing using the built- in vi editor. Enabling vi mode shall disable any other command line editing mode provided as an implementation extension. It need not be possible to set vi mode on for certain block-mode terminals. xtrace Equivalent to -x. -u When the shell tries to expand an unset parameter other than the '@' and '*' special parameters, it shall write a message to standard error and the expansion shall fail with the consequences specified in Section 2.8.1, Consequences of Shell Errors. -v The shell shall write its input to standard error as it is read. -x The shell shall write to standard error a trace for each command after it expands the command and before it executes it. It is unspecified whether the command that turns tracing off is traced. The default for all these options shall be off (unset) unless stated otherwise in the description of the option or unless the shell was invoked with them on; see sh. The remaining arguments shall be assigned in order to the positional parameters. The special parameter '#' shall be set to reflect the number of positional parameters. All positional parameters shall be unset before any new values are assigned. If the first argument is '-', the results are unspecified. The special argument "--" immediately following the set command name can be used to delimit the arguments if the first argument begins with '+' or '-', or to prevent inadvertent listing of all shell variables when there are no arguments. The command set -- without argument shall unset all positional parameters and set the special parameter '#' to zero. See the DESCRIPTION.
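A few minimal sketches of these options in a POSIX shell (file and variable names are made up): `set -a; . ./app.env; set +a` exports every variable assigned while sourcing app.env and then switches allexport back off. After `set -C`, `echo hi > out.txt` fails if out.txt already exists, while `echo hi >| out.txt` still overwrites it. Finally, `set -- alpha beta gamma; echo "$# $1 $3"` prints "3 alpha gamma", and a following `set --` clears the positional parameters and resets '#' to zero.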
# set > Display, set or unset values of shell attributes and positional parameters. > More information: https://manned.org/set. * Display the names and values of shell variables: `set` * Mark variables that are modified or created for export: `set -a` * Notify of job termination immediately: `set -b` * Set various options, e.g. enable `vi` style line editing: `set -o {{vi}}` * Set the shell to exit as soon as the first error is encountered (mostly used in scripts): `set -e`
cut
The cut utility shall cut out bytes (-b option), characters (-c option), or character-delimited fields (-f option) from each line in one or more files, concatenate them, and write them to standard output. The cut utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines. The application shall ensure that the option-argument list (see options -b, -c, and -f below) is a <comma>-separated list or <blank>-separated list of positive numbers and ranges. Ranges can be in three forms. The first is two positive numbers separated by a <hyphen-minus> (low-high), which represents all fields from the first number to the second number. The second is a positive number preceded by a <hyphen-minus> (-high), which represents all fields from field number 1 to that number. The third is a positive number followed by a <hyphen-minus> (low-), which represents that number to the last field, inclusive. The elements in list can be repeated, can overlap, and can be specified in any order, but the bytes, characters, or fields selected shall be written in the order of the input data. If an element appears in the selection list more than once, it shall be written exactly once. The following options shall be supported: -b list Cut based on a list of bytes. Each selected byte shall be output unless the -n option is also specified. It shall not be an error to select bytes not present in the input line. -c list Cut based on a list of characters. Each selected character shall be output. It shall not be an error to select characters not present in the input line. -d delim Set the field delimiter to the character delim. The default is the <tab>. -f list Cut based on a list of fields, assumed to be separated in the file by a delimiter character (see -d). Each selected field shall be output. Output fields shall be separated by a single occurrence of the field delimiter character. Lines with no field delimiters shall be passed through intact, unless -s is specified. It shall not be an error to select fields not present in the input line. -n Do not split characters. When specified with the -b option, each element in list of the form low-high (<hyphen-minus>-separated numbers) shall be modified as follows: * If the byte selected by low is not the first byte of a character, low shall be decremented to select the first byte of the character originally selected by low. If the byte selected by high is not the last byte of a character, high shall be decremented to select the last byte of the character prior to the character originally selected by high, or zero if there is no prior character. If the resulting range element has high equal to zero or low greater than high, the list element shall be dropped from list for that input line without causing an error. Each element in list of the form low- shall be treated as above with high set to the number of bytes in the current line, not including the terminating <newline>. Each element in list of the form -high shall be treated as above with low set to 1. Each element in list of the form num (a single number) shall be treated as above with low set to num and high set to num. -s Suppress lines with no delimiter characters, when used with the -f option. Unless specified, lines with no delimiters shall be passed through untouched.
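Two hedged one-liners showing the list syntax (the input text is made up): `printf 'one,two,three\n' | cut -d ',' -f 1,3` prints "one,three", and `printf 'abcdef\n' | cut -c 2-4` prints "bcd".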
# cut > Cut out fields from `stdin` or files. More information: > https://manned.org/man/freebsd-13.0/cut.1. * Print a specific character/field range of each line: `{{command}} | cut -{{c|f}} {{1|1,10|1-10|1-|-10}}` * Print a range of each line with a specific delimiter: `{{command}} | cut -d "{{,}}" -{{f}} {{1}}` * Print a range of each line of a specific file: `cut -{{c}} {{1}} {{path/to/file}}`
chfn
chfn is used to change your finger information. This information is stored in the /etc/passwd file, and is displayed by the finger program. The Linux finger command will display four pieces of information that can be changed by chfn: your real name, your work room and phone, and your home phone. Any of the four pieces of information can be specified on the command line. If no information is given on the command line, chfn enters interactive mode. In interactive mode, chfn will prompt for each field. At a prompt, you can enter the new information, or just press return to leave the field unchanged. Enter the keyword "none" to make the field blank. chfn supports non-local entries (kerberos, LDAP, etc.) if linked with libuser, otherwise use ypchfn(1), lchfn(1) or any other implementation for non-local entries. -f, --full-name full-name Specify your real name. -o, --office office Specify your office room number. -p, --office-phone office-phone Specify your office phone number. -h, --home-phone home-phone Specify your home phone number. -u, --help Display help text and exit. -V, --version Print version and exit. The short option -V has been used since version 2.39; old versions use the deprecated -v.
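As a sketch (the user name and field values are placeholders, and changing another user's entry normally requires root): `chfn -f 'Alice Example' -o 'B-101' -p 555-0100 alice` sets the real name, office room and office phone non-interactively, while a plain `chfn` walks through the same fields interactively.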
# chfn > Update `finger` info for a user. More information: https://manned.org/chfn. * Update a user's "Name" field in the output of `finger`: `chfn -f {{new_display_name}} {{username}}` * Update a user's "Office Room Number" field for the output of `finger`: `chfn -o {{new_office_room_number}} {{username}}` * Update a user's "Office Phone Number" field for the output of `finger`: `chfn -p {{new_office_telephone_number}} {{username}}` * Update a user's "Home Phone Number" field for the output of `finger`: `chfn -h {{new_home_telephone_number}} {{username}}`
taskset
The taskset command is used to set or retrieve the CPU affinity of a running process given its pid, or to launch a new command with a given CPU affinity. CPU affinity is a scheduler property that "bonds" a process to a given set of CPUs on the system. The Linux scheduler will honor the given CPU affinity and the process will not run on any other CPUs. Note that the Linux scheduler also supports natural CPU affinity: the scheduler attempts to keep processes on the same CPU as long as practical for performance reasons. Therefore, forcing a specific CPU affinity is useful only in certain applications. The affinity of some processes like kernel per-CPU threads cannot be set. The CPU affinity is represented as a bitmask, with the lowest order bit corresponding to the first logical CPU and the highest order bit corresponding to the last logical CPU. Not all CPUs may exist on a given system but a mask may specify more CPUs than are present. A retrieved mask will reflect only the bits that correspond to CPUs physically on the system. If an invalid mask is given (i.e., one that corresponds to no valid CPUs on the current system) an error is returned. The masks may be specified in hexadecimal (with or without a leading "0x"), or as a CPU list with the --cpu-list option. For example, 0x00000001 is processor #0, 0x00000003 is processors #0 and #1, FFFFFFFF is processors #0 through #31, 0x32 is processors #1, #4, and #5, --cpu-list 0-2,6 is processors #0, #1, #2, and #6. --cpu-list 0-10:2 is processors #0, #2, #4, #6, #8 and #10. The suffix ":N" specifies stride in the range, for example 0-10:3 is interpreted as 0,3,6,9 list. When taskset returns, it is guaranteed that the given program has been scheduled to a legal CPU. -a, --all-tasks Set or retrieve the CPU affinity of all the tasks (threads) for a given PID. -c, --cpu-list Interpret mask as numerical list of processors instead of a bitmask. Numbers are separated by commas and may include ranges. For example: 0,5,8-11. -p, --pid Operate on an existing PID and do not launch a new task. -h, --help Display help text and exit. -V, --version Print version and exit.
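To make the two addressing styles concrete (the PID and program name are placeholders): `taskset 0x5 ./worker` starts ./worker restricted to CPUs #0 and #2 (mask 0x5 is binary 101), and `taskset -cp 0-3 1234` re-pins the existing process 1234 to CPUs 0 through 3 using list syntax.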
# taskset > Get or set a process' CPU affinity or start a new process with a defined CPU > affinity. More information: https://manned.org/taskset. * Get a running process' CPU affinity by PID: `taskset --pid --cpu-list {{pid}}` * Set a running process' CPU affinity by PID: `taskset --pid --cpu-list {{cpu_id}} {{pid}}` * Start a new process with affinity for a single CPU: `taskset --cpu-list {{cpu_id}} {{command}}` * Start a new process with affinity for multiple non-sequential CPUs: `taskset --cpu-list {{cpu_id_1}},{{cpu_id_2}},{{cpu_id_3}} {{command}}` * Start a new process with affinity for CPUs 1 through 4: `taskset --cpu-list {{cpu_id_1}}-{{cpu_id_4}} {{command}}`
script
script makes a typescript of everything on your terminal session. The terminal data are stored in raw form to the log file and information about timing to another (optional) structured log file. The timing log file is necessary to replay the session later by scriptreplay(1) and to store additional information about the session. Since version 2.35, script supports multiple streams and allows the logging of input and output to separate files or all in one file. This version also supports a new timing file which records additional information. The command scriptreplay --summary then provides all the information. If the argument file or option --log-out file is given, script saves the dialogue in this file. If no filename is given, the dialogue is saved in the file typescript. Note that logging input using --log-in or --log-io may record security-sensitive information as the log file contains all terminal session input (e.g., passwords) independently of the terminal echo flag setting. Below, the size argument may be followed by the multiplicative suffixes KiB (=1024), MiB (=1024*1024), and so on for GiB, TiB, PiB, EiB, ZiB and YiB (the "iB" is optional, e.g., "K" has the same meaning as "KiB"), or the suffixes KB (=1000), MB (=1000*1000), and so on for GB, TB, PB, EB, ZB and YB. -a, --append Append the output to file or to typescript, retaining the prior contents. -c, --command command Run the command rather than an interactive shell. This makes it easy for a script to capture the output of a program that behaves differently when its stdout is not a tty. -E, --echo when This option controls the ECHO flag for the slave end of the session’s pseudoterminal. The supported modes are always, never, or auto. The default is auto: in this case, ECHO is enabled for the pseudoterminal slave; if the current standard input is a terminal, ECHO is disabled for it to prevent double echo; if the current standard input is not a terminal (for example pipe: echo date | script) then keeping ECHO enabled for the pseudoterminal slave enables the standard input data to be viewed on screen while being recorded to the session log simultaneously. Note that 'never' mode affects the content of the session output log, because the user's input is not repeated on output. -e, --return Return the exit status of the child process. Uses the same format as bash termination on signal termination (i.e., exit status is 128 + the signal number). The exit status of the child process is always stored in the typescript file too. -f, --flush Flush output after each write. This is nice for telecooperation: one person does mkfifo foo; script -f foo, and another can supervise in real-time what is being done using cat foo. Note that flush has an impact on performance; it’s possible to use SIGUSR1 to flush logs on demand. --force Allow the default output file typescript to be a hard or symbolic link. The command will follow a symbolic link. -B, --log-io file Log input and output to the same file. Note, this option makes sense only if --log-timing is also specified, otherwise it’s impossible to separate output and input streams from the log file. -I, --log-in file Log input to the file. The log output is disabled if only --log-in is specified. Use this logging functionality carefully as it logs all input, including input when the terminal has the echo flag disabled (for example, password inputs). -O, --log-out file Log output to the file. The default is to log output to the file with name typescript if the option --log-out or --log-in is not given.
The log output is disabled if only --log-in is specified. -T, --log-timing file Log timing information to the file. Two timing file formats are supported now. The classic format is used when only one stream (input or output) logging is enabled. The multi-stream format is used on --log-io or when --log-in and --log-out are used together. See also --logging-format. -m, --logging-format format Force use of advanced or classic timing log format. The default is the classic format when only output is logged, and the advanced format when input as well as output logging is requested. Classic format The timing log contains two fields, separated by a space. The first field indicates how much time elapsed since the previous output. The second field indicates how many characters were output this time. Advanced (multi-stream) format The first field is an entry type identifier ('I'nput, 'O'utput, 'H'eader, 'S'ignal). The second field is how much time elapsed since the previous entry, and the rest of the entry is type-specific data. -o, --output-limit size Limit the size of the typescript and timing files to size and stop the child process after this size is exceeded. The calculated file size does not include the start and done messages that the script command prepends and appends to the child process output. Due to buffering, the resulting output file might be larger than the specified value. -q, --quiet Be quiet (do not write start and done messages to standard output). -t[file], --timing[=file] Output timing data to standard error, or to file when given. This option is deprecated in favour of --log-timing where the file argument is not optional. -h, --help Display help text and exit. -V, --version Print version and exit.
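A hedged end-to-end sketch, assuming a util-linux recent enough (2.35 or later) that scriptreplay(1) accepts the matching --log-* options: `script --log-out demo.log --log-timing demo.tm` records the session to the two files, and after exiting, `scriptreplay --log-out demo.log --log-timing demo.tm` plays it back with the original timing.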
# script > Make a typescript file of a terminal session. More information: > https://manned.org/script. * Start recording in file named "typescript": `script` * Stop recording: `exit` * Start recording in a given file: `script {{logfile.log}}` * Append to an existing file: `script -a {{logfile.log}}` * Execute quietly without start and done messages: `script -q {{logfile.log}}`
chown
The chown utility shall set the user ID of the file named by each file operand to the user ID specified by the owner operand. For each file operand, or, if the -R option is used, each file encountered while walking the directory trees specified by the file operands, the chown utility shall perform actions equivalent to the chown() function defined in the System Interfaces volume of POSIX.1‐2017, called with the following arguments: 1. The file operand shall be used as the path argument. 2. The user ID indicated by the owner portion of the first operand shall be used as the owner argument. 3. If the group portion of the first operand is given, the group ID indicated by it shall be used as the group argument; otherwise, the group ownership shall not be changed. Unless chown is invoked by a process with appropriate privileges, the set-user-ID and set-group-ID bits of a regular file shall be cleared upon successful completion; the set-user-ID and set- group-ID bits of other file types may be cleared. The chown utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines. The following options shall be supported by the implementation: -h For each file operand that names a file of type symbolic link, chown shall attempt to set the user ID of the symbolic link. If a group ID was specified, for each file operand that names a file of type symbolic link, chown shall attempt to set the group ID of the symbolic link. -H If the -R option is specified and a symbolic link referencing a file of type directory is specified on the command line, chown shall change the user ID (and group ID, if specified) of the directory referenced by the symbolic link and all files in the file hierarchy below it. -L If the -R option is specified and a symbolic link referencing a file of type directory is specified on the command line or encountered during the traversal of a file hierarchy, chown shall change the user ID (and group ID, if specified) of the directory referenced by the symbolic link and all files in the file hierarchy below it. -P If the -R option is specified and a symbolic link is specified on the command line or encountered during the traversal of a file hierarchy, chown shall change the owner ID (and group ID, if specified) of the symbolic link. The chown utility shall not follow the symbolic link to any other part of the file hierarchy. -R Recursively change file user and group IDs. For each file operand that names a directory, chown shall change the user ID (and group ID, if specified) of the directory and all files in the file hierarchy below it. Unless a -H, -L, or -P option is specified, it is unspecified which of these options will be used as the default. Specifying more than one of the mutually-exclusive options -H, -L, and -P shall not be considered an error. The last option specified shall determine the behavior of the utility.
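To illustrate the symbolic-link options with placeholder names: `chown -h alice current_release` changes the owner of the link itself; `chown -R -L alice /srv/data` follows any symbolic links to directories met during the recursive walk; and `chown -R -P alice /srv/data` changes the links themselves without following them.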
# chown > Change user and group ownership of files and directories. More information: > https://www.gnu.org/software/coreutils/chown. * Change the owner user of a file/directory: `chown {{user}} {{path/to/file_or_directory}}` * Change the owner user and group of a file/directory: `chown {{user}}:{{group}} {{path/to/file_or_directory}}` * Recursively change the owner of a directory and its contents: `chown -R {{user}} {{path/to/directory}}` * Change the owner of a symbolic link: `chown -h {{user}} {{path/to/symlink}}` * Change the owner of a file/directory to match a reference file: `chown --reference={{path/to/reference_file}} {{path/to/file_or_directory}}`
g++
When you invoke GCC, it normally does preprocessing, compilation, assembly and linking. The "overall options" allow you to stop this process at an intermediate stage. For example, the -c option says not to run the linker. Then the output consists of object files output by the assembler. Other options are passed on to one or more stages of processing. Some options control the preprocessor and others the compiler itself. Yet other options control the assembler and linker; most of these are not documented here, since you rarely need to use any of them. Most of the command-line options that you can use with GCC are useful for C programs; when an option is only useful with another language (usually C++), the explanation says so explicitly. If the description for a particular option does not mention a source language, you can use that option with all supported languages. The usual way to run GCC is to run the executable called gcc, or machine-gcc when cross-compiling, or machine-gcc-version to run a specific version of GCC. When you compile C++ programs, you should invoke GCC as g++ instead. The gcc program accepts options and file names as operands. Many options have multi-letter names; therefore multiple single-letter options may not be grouped: -dv is very different from -d -v. You can mix options and other arguments. For the most part, the order you use doesn't matter. Order does matter when you use several options of the same kind; for example, if you specify -L more than once, the directories are searched in the order specified. Also, the placement of the -l option is significant. Many options have long names starting with -f or with -W; for example, -fmove-loop-invariants, -Wformat and so on. Most of these have both positive and negative forms; the negative form of -ffoo is -fno-foo. This manual documents only one of these two forms, whichever one is not the default. Some options take one or more arguments typically separated either by a space or by the equals sign (=) from the option name. Unless documented otherwise, an argument can be either numeric or a string. Numeric arguments must typically be small unsigned decimal or hexadecimal integers. Hexadecimal arguments must begin with the 0x prefix. Arguments to options that specify a size threshold of some sort may be arbitrarily large decimal or hexadecimal integers followed by a byte size suffix designating a multiple of bytes such as "kB" and "KiB" for kilobyte and kibibyte, respectively, "MB" and "MiB" for megabyte and mebibyte, "GB" and "GiB" for gigabyte and gibibyte, and so on. Such arguments are designated by byte-size in the following text. Refer to the NIST, IEC, and other relevant national and international standards for the full listing and explanation of the binary and decimal byte size prefixes. Option Summary Here is a summary of all the options, grouped by type. Explanations are in the following sections. Overall Options -c -S -E -o file -x language -v -
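A hedged example of the flag forms and search-order rules described above (paths and library names are placeholders): `g++ -O2 -fno-move-loop-invariants main.cpp -o app` disables an optimization whose positive form is -fmove-loop-invariants, and in `g++ main.o -L./libs -L/opt/extra/lib -lwidgets -o app` the two -L directories are searched for libwidgets in the order given.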
# g++ > Compiles C++ source files. Part of GCC (GNU Compiler Collection). More > information: https://gcc.gnu.org. * Compile a source code file into an executable binary: `g++ {{path/to/source.cpp}} -o {{path/to/output_executable}}` * Display common warnings: `g++ {{path/to/source.cpp}} -Wall -o {{path/to/output_executable}}` * Choose a language standard to compile for (C++98/C++11/C++14/C++17): `g++ {{path/to/source.cpp}} -std={{c++98|c++11|c++14|c++17}} -o {{path/to/output_executable}}` * Include libraries located at a different path than the source file: `g++ {{path/to/source.cpp}} -o {{path/to/output_executable}} -I{{path/to/header}} -L{{path/to/library}} -l{{library_name}}` * Compile and link multiple source code files into an executable binary: `g++ -c {{path/to/source_1.cpp path/to/source_2.cpp ...}} && g++ -o {{path/to/output_executable}} {{path/to/source_1.o path/to/source_2.o ...}}` * Display version: `g++ --version`
cp
Copy SOURCE to DEST, or multiple SOURCE(s) to DIRECTORY. Mandatory arguments to long options are mandatory for short options too. -a, --archive same as -dR --preserve=all --attributes-only don't copy the file data, just the attributes --backup[=CONTROL] make a backup of each existing destination file -b like --backup but does not accept an argument --copy-contents copy contents of special files when recursive -d same as --no-dereference --preserve=links --debug explain how a file is copied. Implies -v -f, --force if an existing destination file cannot be opened, remove it and try again (this option is ignored when the -n option is also used) -i, --interactive prompt before overwrite (overrides a previous -n option) -H follow command-line symbolic links in SOURCE -l, --link hard link files instead of copying -L, --dereference always follow symbolic links in SOURCE -n, --no-clobber do not overwrite an existing file (overrides a -u or previous -i option). See also --update -P, --no-dereference never follow symbolic links in SOURCE -p same as --preserve=mode,ownership,timestamps --preserve[=ATTR_LIST] preserve the specified attributes --no-preserve=ATTR_LIST don't preserve the specified attributes --parents use full source file name under DIRECTORY -R, -r, --recursive copy directories recursively --reflink[=WHEN] control clone/CoW copies. See below --remove-destination remove each existing destination file before attempting to open it (contrast with --force) --sparse=WHEN control creation of sparse files. See below --strip-trailing-slashes remove any trailing slashes from each SOURCE argument -s, --symbolic-link make symbolic links instead of copying -S, --suffix=SUFFIX override the usual backup suffix -t, --target-directory=DIRECTORY copy all SOURCE arguments into DIRECTORY -T, --no-target-directory treat DEST as a normal file --update[=UPDATE] control which existing files are updated; UPDATE={all,none,older(default)}. See below -u equivalent to --update[=older] -v, --verbose explain what is being done -x, --one-file-system stay on this file system -Z set SELinux security context of destination file to default type --context[=CTX] like -Z, or if CTX is specified then set the SELinux or SMACK security context to CTX --help display this help and exit --version output version information and exit ATTR_LIST is a comma-separated list of attributes. Attributes are 'mode' for permissions (including any ACL and xattr permissions), 'ownership' for user and group, 'timestamps' for file timestamps, 'links' for hard links, 'context' for security context, 'xattr' for extended attributes, and 'all' for all attributes. By default, sparse SOURCE files are detected by a crude heuristic and the corresponding DEST file is made sparse as well. That is the behavior selected by --sparse=auto. Specify --sparse=always to create a sparse DEST file whenever the SOURCE file contains a long enough sequence of zero bytes. Use --sparse=never to inhibit creation of sparse files. UPDATE controls which existing files in the destination are replaced. 'all' is the default operation when an --update option is not specified, and results in all existing files in the destination being replaced. 'none' is similar to the --no-clobber option, in that no files in the destination are replaced, but also skipped files do not induce a failure. 'older' is the default operation when --update is specified, and results in files being replaced if they're older than the corresponding source file. 
When --reflink[=always] is specified, perform a lightweight copy, where the data blocks are copied only when modified. If this is not possible the copy fails, or if --reflink=auto is specified, fall back to a standard copy. Use --reflink=never to ensure a standard copy is performed. The backup suffix is '~', unless set with --suffix or SIMPLE_BACKUP_SUFFIX. The version control method may be selected via the --backup option or through the VERSION_CONTROL environment variable. Here are the values: none, off never make backups (even if --backup is given) numbered, t make numbered backups existing, nil numbered if numbered backups exist, simple otherwise simple, never always make simple backups As a special case, cp makes a backup of SOURCE when the force and backup options are given and SOURCE and DEST are the same name for an existing, regular file.
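Two hedged examples with made-up file names: `cp --reflink=auto big.img big-copy.img` attempts a lightweight copy-on-write clone and falls back to a standard copy if the filesystem cannot do it, and `cp --backup=numbered notes.txt archive/` keeps numbered backups (notes.txt.~1~, notes.txt.~2~, ...) of any notes.txt already present in archive/ before replacing it. Similarly, `cp --sparse=always disk.img copy.img` recreates long zero runs as holes in the destination file.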
# cp > Copy files and directories. More information: > https://www.gnu.org/software/coreutils/cp. * Copy a file to another location: `cp {{path/to/source_file.ext}} {{path/to/target_file.ext}}` * Copy a file into another directory, keeping the filename: `cp {{path/to/source_file.ext}} {{path/to/target_parent_directory}}` * Recursively copy a directory's contents to another location (if the destination exists, the directory is copied inside it): `cp -R {{path/to/source_directory}} {{path/to/target_directory}}` * Copy a directory recursively, in verbose mode (shows files as they are copied): `cp -vR {{path/to/source_directory}} {{path/to/target_directory}}` * Copy multiple files at once to a directory: `cp -t {{path/to/destination_directory}} {{path/to/file1 path/to/file2 ...}}` * Copy text files to another location, in interactive mode (prompts user before overwriting): `cp -i {{*.txt}} {{path/to/target_directory}}` * Follow symbolic links before copying: `cp -L {{link}} {{path/to/target_directory}}` * Use the first argument as the destination directory (useful for `xargs ... | cp -t <DEST_DIR>`): `cp -t {{path/to/target_directory}} {{path/to/file_or_directory1 path/to/file_or_directory2 ...}}`
sar
The sar command writes to standard output the contents of selected cumulative activity counters in the operating system. The accounting system, based on the values in the count and interval parameters, writes information the specified number of times spaced at the specified intervals in seconds. If the interval parameter is set to zero, the sar command displays the average statistics for the time since the system was started. If the interval parameter is specified without the count parameter, then reports are generated continuously. The collected data can also be saved in the file specified by the -o filename flag, in addition to being displayed onto the screen. If filename is omitted, sar uses the standard system activity daily data file (see below). By default all the data available from the kernel are saved in the data file. The sar command extracts and writes to standard output records previously saved in a file. This file can be either the one specified by the -f flag or, by default, the standard system activity daily data file. It is also possible to enter -1, -2 etc. as an argument to sar to display data of that days ago. For example, -1 will point at the standard system activity file of yesterday. Standard system activity daily data files are named saDD or saYYYYMMDD, where YYYY stands for the current year, MM for the current month and DD for the current day. They are the default files used by sar only when no filename has been explicitly specified. When used to write data to files (with its option -o), sar will use saYYYYMMDD if option -D has also been specified, else it will use saDD. When used to display the records previously saved in a file, sar will look for the most recent of saDD and saYYYYMMDD, and use it. Standard system activity daily data files are located in the /var/log/sa directory by default. Yet it is possible to specify an alternate location for them: If a directory (instead of a plain file) is used with options -f or -o then it will be considered as the directory containing the data files. Without the -P flag, the sar command reports system-wide (global among all processors) statistics, which are calculated as averages for values expressed as percentages, and as sums otherwise. If the -P flag is given, the sar command reports activity which relates to the specified processor or processors. If -P ALL is given, the sar command reports statistics for each individual processor and global statistics among all processors. Offline processors are not displayed. You can select information about specific system activities using flags. Not specifying any flags selects only CPU activity. Specifying the -A flag selects all possible activities. The default version of the sar command (CPU utilization report) might be one of the first facilities the user runs to begin system activity investigation, because it monitors major system resources. If CPU utilization is near 100 percent (user + nice + system), the workload sampled is CPU-bound. If multiple samples and multiple reports are desired, it is convenient to specify an output file for the sar command. Run the sar command as a background process. The syntax for this is: sar -o datafile interval count >/dev/null 2>&1 & All data are captured in binary form and saved to a file (datafile). The data can then be selectively displayed with the sar command using the -f option. Set the interval and count parameters to select count records at interval second intervals. 
If the count parameter is not set, all the records saved in the file will be selected. Collection of data in this manner is useful to characterize system usage over a period of time and determine peak usage hours. Note: The sar command only reports on local activities. -A This is equivalent to specifying -bBdFHISvwWy -m ALL -n ALL -q ALL -r ALL -u ALL. This option also implies specifying -I ALL -P ALL unless these options are explicitly set on the command line. -B Report paging statistics. The following values are displayed: pgpgin/s Total number of kilobytes the system paged in from disk per second. pgpgout/s Total number of kilobytes the system paged out to disk per second. fault/s Number of page faults (major + minor) made by the system per second. This is not a count of page faults that generate I/O, because some page faults can be resolved without I/O. majflt/s Number of major faults the system has made per second, those which have required loading a memory page from disk. pgfree/s Number of pages placed on the free list by the system per second. pgscank/s Number of pages scanned by the kswapd daemon per second. pgscand/s Number of pages scanned directly per second. pgsteal/s Number of pages the system has reclaimed from cache (pagecache and swapcache) per second to satisfy its memory demands. %vmeff Calculated as pgsteal / pgscan, this is a metric of the efficiency of page reclaim. If it is near 100% then almost every page coming off the tail of the inactive list is being reaped. If it gets too low (e.g. less than 30%) then the virtual memory is having some difficulty. This field is displayed as zero if no pages have been scanned during the interval of time. -b Report I/O and transfer rate statistics. The following values are displayed: tps Total number of transfers per second that were issued to physical devices. A transfer is an I/O request to a physical device. Multiple logical requests can be combined into a single I/O request to the device. A transfer is of indeterminate size. rtps Total number of read requests per second issued to physical devices. wtps Total number of write requests per second issued to physical devices. dtps Total number of discard requests per second issued to physical devices. bread/s Total amount of data read from the devices in blocks per second. Blocks are equivalent to sectors and therefore have a size of 512 bytes. bwrtn/s Total amount of data written to devices in blocks per second. bdscd/s Total amount of data discarded for devices in blocks per second. -C When reading data from a file, tell sar to display comments that have been inserted by sadc. -D Use saYYYYMMDD instead of saDD as the standard system activity daily data file name. This option works only when used in conjunction with option -o to save data to file. -d Report activity for each block device. When data are displayed, the device name is displayed as it (should) appear in /dev. sar uses data in /sys to determine the device name based on its major and minor numbers. If this name resolution fails, sar will use name mapping controlled by /etc/sysconfig/sysstat.ioconf file. Persistent device names can also be printed if option -j is used (see below). Statistics for all devices are displayed unless a restricted list is specified using option --dev= (see corresponding option entry). Note that disk activity depends on sadc's options -S DISK and -S XDISK to be collected. The following values are displayed: tps Total number of transfers per second that were issued to physical devices. 
A transfer is an I/O request to a physical device. Multiple logical requests can be combined into a single I/O request to the device. A transfer is of indeterminate size. rkB/s Number of kilobytes read from the device per second. wkB/s Number of kilobytes written to the device per second. dkB/s Number of kilobytes discarded for the device per second. areq-sz The average size (in kilobytes) of the I/O requests that were issued to the device. Note: In previous versions, this field was known as avgrq-sz and was expressed in sectors. aqu-sz The average queue length of the requests that were issued to the device. Note: In previous versions, this field was known as avgqu-sz. await The average time (in milliseconds) for I/O requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them. %util Percentage of elapsed time during which I/O requests were issued to the device (bandwidth utilization for the device). Device saturation occurs when this value is close to 100% for devices serving requests serially. But for devices serving requests in parallel, such as RAID arrays and modern SSDs, this number does not reflect their performance limits. --dec={ 0 | 1 | 2 } Specify the number of decimal places to use (0 to 2, default value is 2). --dev=dev_list Specify the block devices for which statistics are to be displayed by sar. dev_list is a list of comma-separated device names. -e [ hh:mm[:ss] ] -e [ seconds_since_the_epoch ] Set the ending time of the report. The default ending time is 18:00:00. Hours must be given in 24-hour format, or as the number of seconds since the epoch (given as a 10 digit number). This option can be used when data are read from or written to a file (options -f or -o). -F [ MOUNT ] Display statistics for currently mounted filesystems. Pseudo-filesystems are ignored. At the end of the report, sar will display a summary of all those filesystems. Use of the MOUNT parameter keyword indicates that mountpoint will be reported instead of filesystem device. Statistics for all filesystems are displayed unless a restricted list is specified using option --fs= (see corresponding option entry). Note that filesystems statistics depend on sadc's option -S XDISK to be collected. The following values are displayed: MBfsfree Total amount of free space in megabytes (including space available only to privileged user). MBfsused Total amount of space used in megabytes. %fsused Percentage of filesystem space used, as seen by a privileged user. %ufsused Percentage of filesystem space used, as seen by an unprivileged user. Ifree Total number of free file nodes in filesystem. Iused Total number of file nodes used in filesystem. %Iused Percentage of file nodes used in filesystem. -f [ filename ] Extract records from filename (created by the -o filename flag). The default value of the filename parameter is the current standard system activity daily data file. If filename is a directory instead of a plain file then it is considered as the directory where the standard system activity daily data files are located. Option -f is exclusive of option -o. --fs=fs_list Specify the filesystems for which statistics are to be displayed by sar. fs_list is a list of comma-separated filesystem names or mountpoints. -H Report hugepages utilization statistics. The following values are displayed: kbhugfree Amount of hugepages memory in kilobytes that is not yet allocated. kbhugused Amount of hugepages memory in kilobytes that has been allocated. 
%hugused Percentage of total hugepages memory that has been allocated. kbhugrsvd Amount of reserved hugepages memory in kilobytes. kbhugsurp Amount of surplus hugepages memory in kilobytes. -h This option is equivalent to specifying --pretty --human. --help Display a short help message then exit. --human Print sizes in human readable format (e.g. 1.0k, 1.2M, etc.) The units displayed with this option supersede any other default units (e.g. kilobytes, sectors...) associated with the metrics. -I [ SUM | ALL ] Report statistics for interrupts. The values displayed are the number of interrupts per second for the given processor or among all processors. A list of interrupts can be specified using --int= (see this option). The SUM keyword indicates that the total number of interrupts received per second is to be displayed. The ALL keyword indicates that statistics from all interrupts are to be reported (this is the default). Note that interrupts statistics depend on sadc's option -S INT to be collected. -i interval Select data records at seconds as close as possible to the number specified by the interval parameter. --iface=iface_list Specify the network interfaces for which statistics are to be displayed by sar. iface_list is a list of comma- separated interface names. --int=int_list Specify the interrupts names for which statistics are to be displayed by sar. int_list is a list of comma- separated values or range of values (e.g., 0-16,35,40-). -j { SID | ID | LABEL | PATH | UUID | ... } Display persistent device names. Use this option in conjunction with option -d. Keywords ID, LABEL, etc. specify the type of the persistent name. These keywords are not limited, only prerequisite is that directory with required persistent names is present in /dev/disk. Keyword SID tries to get a stable identifier to use as the device name. A stable identifier won't change across reboots for the same physical device. If it exists, this identifier is normally the WWN (World Wide Name) of the device, as read from the /dev/disk/by-id directory. -m { keyword[,...] | ALL } Report power management statistics. Note that these statistics depend on sadc's option -S POWER to be collected. Possible keywords are BAT, CPU, FAN, FREQ, IN, TEMP and USB. With the BAT keyword, statistics about batteries capacity are reported. The following values are displayed: %cap Battery capacity. cap/min Capacity lost or gained per minute by the battery. status Charging status of the battery: ↑ (full), ↗ (charging), → (not charging), ↘ (discharging), ? (unknown). With the CPU keyword, statistics about CPU are reported. The following value is displayed: MHz Instantaneous CPU clock frequency in MHz. With the FAN keyword, statistics about fans speed are reported. The following values are displayed: rpm Fan speed expressed in revolutions per minute. drpm This field is calculated as the difference between current fan speed (rpm) and its low limit (fan_min). DEVICE Sensor device name. With the FREQ keyword, statistics about CPU clock frequency are reported. The following value is displayed: wghMHz Weighted average CPU clock frequency in MHz. Note that the cpufreq-stats driver must be compiled in the kernel for this option to work. With the IN keyword, statistics about voltage inputs are reported. The following values are displayed: inV Voltage input expressed in Volts. %in Relative input value. A value of 100% means that voltage input has reached its high limit (in_max) whereas a value of 0% means that it has reached its low limit (in_min). 
DEVICE Sensor device name. With the TEMP keyword, statistics about devices temperature are reported. The following values are displayed: degC Device temperature expressed in degrees Celsius. %temp Relative device temperature. A value of 100% means that temperature has reached its high limit (temp_max). DEVICE Sensor device name. With the USB keyword, the sar command takes a snapshot of all the USB devices currently plugged into the system. At the end of the report, sar will display a summary of all those USB devices. The following values are displayed: BUS Root hub number of the USB device. idvendor Vendor ID number (assigned by USB organization). idprod Product ID number (assigned by Manufacturer). maxpower Maximum power consumption of the device (expressed in mA). manufact Manufacturer name. product Product name. The ALL keyword is equivalent to specifying all the keywords above and therefore all the power management statistics are reported. -n { keyword[,...] | ALL } Report network statistics. Possible keywords are DEV, EDEV, FC, ICMP, EICMP, ICMP6, EICMP6, IP, EIP, IP6, EIP6, NFS, NFSD, SOCK, SOCK6, SOFT, TCP, ETCP, UDP and UDP6. With the DEV keyword, statistics from the network devices are reported. Statistics for all network interfaces are displayed unless a restricted list is specified using option --iface= (see corresponding option entry). The following values are displayed: IFACE Name of the network interface for which statistics are reported. rxpck/s Total number of packets received per second. txpck/s Total number of packets transmitted per second. rxkB/s Total number of kilobytes received per second. txkB/s Total number of kilobytes transmitted per second. rxcmp/s Number of compressed packets received per second (for cslip etc.). txcmp/s Number of compressed packets transmitted per second. rxmcst/s Number of multicast packets received per second. %ifutil Utilization percentage of the network interface. For half-duplex interfaces, utilization is calculated using the sum of rxkB/s and txkB/s as a percentage of the interface speed. For full-duplex, this is the greater of rxkB/S or txkB/s. With the EDEV keyword, statistics on failures (errors) from the network devices are reported. Statistics for all network interfaces are displayed unless a restricted list is specified using option --iface= (see corresponding option entry). The following values are displayed: IFACE Name of the network interface for which statistics are reported. rxerr/s Total number of bad packets received per second. txerr/s Total number of errors that happened per second while transmitting packets. coll/s Number of collisions that happened per second while transmitting packets. rxdrop/s Number of received packets dropped per second because of a lack of space in linux buffers. txdrop/s Number of transmitted packets dropped per second because of a lack of space in linux buffers. txcarr/s Number of carrier-errors that happened per second while transmitting packets. rxfram/s Number of frame alignment errors that happened per second on received packets. rxfifo/s Number of FIFO overrun errors that happened per second on received packets. txfifo/s Number of FIFO overrun errors that happened per second on transmitted packets. With the FC keyword, statistics about fibre channel traffic are reported. Note that fibre channel statistics depend on sadc's option -S DISK to be collected. The following values are displayed: FCHOST Name of the fibre channel host bus adapter (HBA) interface for which statistics are reported. 
fch_rxf/s The total number of frames received per second. fch_txf/s The total number of frames transmitted per second. fch_rxw/s The total number of transmission words received per second. fch_txw/s The total number of transmission words transmitted per second. With the ICMP keyword, statistics about ICMPv4 network traffic are reported. Note that ICMPv4 statistics depend on sadc's option -S SNMP to be collected. The following values are displayed (formal SNMP names between square brackets): imsg/s The total number of ICMP messages which the entity received per second [icmpInMsgs]. Note that this counter includes all those counted by ierr/s. omsg/s The total number of ICMP messages which this entity attempted to send per second [icmpOutMsgs]. Note that this counter includes all those counted by oerr/s. iech/s The number of ICMP Echo (request) messages received per second [icmpInEchos]. iechr/s The number of ICMP Echo Reply messages received per second [icmpInEchoReps]. oech/s The number of ICMP Echo (request) messages sent per second [icmpOutEchos]. oechr/s The number of ICMP Echo Reply messages sent per second [icmpOutEchoReps]. itm/s The number of ICMP Timestamp (request) messages received per second [icmpInTimestamps]. itmr/s The number of ICMP Timestamp Reply messages received per second [icmpInTimestampReps]. otm/s The number of ICMP Timestamp (request) messages sent per second [icmpOutTimestamps]. otmr/s The number of ICMP Timestamp Reply messages sent per second [icmpOutTimestampReps]. iadrmk/s The number of ICMP Address Mask Request messages received per second [icmpInAddrMasks]. iadrmkr/s The number of ICMP Address Mask Reply messages received per second [icmpInAddrMaskReps]. oadrmk/s The number of ICMP Address Mask Request messages sent per second [icmpOutAddrMasks]. oadrmkr/s The number of ICMP Address Mask Reply messages sent per second [icmpOutAddrMaskReps]. With the EICMP keyword, statistics about ICMPv4 error messages are reported. Note that ICMPv4 statistics depend on sadc's option -S SNMP to be collected. The following values are displayed (formal SNMP names between square brackets): ierr/s The number of ICMP messages per second which the entity received but determined as having ICMP- specific errors (bad ICMP checksums, bad length, etc.) [icmpInErrors]. oerr/s The number of ICMP messages per second which this entity did not send due to problems discovered within ICMP such as a lack of buffers [icmpOutErrors]. idstunr/s The number of ICMP Destination Unreachable messages received per second [icmpInDestUnreachs]. odstunr/s The number of ICMP Destination Unreachable messages sent per second [icmpOutDestUnreachs]. itmex/s The number of ICMP Time Exceeded messages received per second [icmpInTimeExcds]. otmex/s The number of ICMP Time Exceeded messages sent per second [icmpOutTimeExcds]. iparmpb/s The number of ICMP Parameter Problem messages received per second [icmpInParmProbs]. oparmpb/s The number of ICMP Parameter Problem messages sent per second [icmpOutParmProbs]. isrcq/s The number of ICMP Source Quench messages received per second [icmpInSrcQuenchs]. osrcq/s The number of ICMP Source Quench messages sent per second [icmpOutSrcQuenchs]. iredir/s The number of ICMP Redirect messages received per second [icmpInRedirects]. oredir/s The number of ICMP Redirect messages sent per second [icmpOutRedirects]. With the ICMP6 keyword, statistics about ICMPv6 network traffic are reported. Note that ICMPv6 statistics depend on sadc's option -S IPV6 to be collected. 
The following values are displayed (formal SNMP names between square brackets): imsg6/s The total number of ICMP messages received by the interface per second which includes all those counted by ierr6/s [ipv6IfIcmpInMsgs]. omsg6/s The total number of ICMP messages which this interface attempted to send per second [ipv6IfIcmpOutMsgs]. iech6/s The number of ICMP Echo (request) messages received by the interface per second [ipv6IfIcmpInEchos]. iechr6/s The number of ICMP Echo Reply messages received by the interface per second [ipv6IfIcmpInEchoReplies]. oechr6/s The number of ICMP Echo Reply messages sent by the interface per second [ipv6IfIcmpOutEchoReplies]. igmbq6/s The number of ICMPv6 Group Membership Query messages received by the interface per second [ipv6IfIcmpInGroupMembQueries]. igmbr6/s The number of ICMPv6 Group Membership Response messages received by the interface per second [ipv6IfIcmpInGroupMembResponses]. ogmbr6/s The number of ICMPv6 Group Membership Response messages sent per second [ipv6IfIcmpOutGroupMembResponses]. igmbrd6/s The number of ICMPv6 Group Membership Reduction messages received by the interface per second [ipv6IfIcmpInGroupMembReductions]. ogmbrd6/s The number of ICMPv6 Group Membership Reduction messages sent per second [ipv6IfIcmpOutGroupMembReductions]. irtsol6/s The number of ICMP Router Solicit messages received by the interface per second [ipv6IfIcmpInRouterSolicits]. ortsol6/s The number of ICMP Router Solicitation messages sent by the interface per second [ipv6IfIcmpOutRouterSolicits]. irtad6/s The number of ICMP Router Advertisement messages received by the interface per second [ipv6IfIcmpInRouterAdvertisements]. inbsol6/s The number of ICMP Neighbor Solicit messages received by the interface per second [ipv6IfIcmpInNeighborSolicits]. onbsol6/s The number of ICMP Neighbor Solicitation messages sent by the interface per second [ipv6IfIcmpOutNeighborSolicits]. inbad6/s The number of ICMP Neighbor Advertisement messages received by the interface per second [ipv6IfIcmpInNeighborAdvertisements]. onbad6/s The number of ICMP Neighbor Advertisement messages sent by the interface per second [ipv6IfIcmpOutNeighborAdvertisements]. With the EICMP6 keyword, statistics about ICMPv6 error messages are reported. Note that ICMPv6 statistics depend on sadc's option -S IPV6 to be collected. The following values are displayed (formal SNMP names between square brackets): ierr6/s The number of ICMP messages per second which the interface received but determined as having ICMP- specific errors (bad ICMP checksums, bad length, etc.) [ipv6IfIcmpInErrors] idtunr6/s The number of ICMP Destination Unreachable messages received by the interface per second [ipv6IfIcmpInDestUnreachs]. odtunr6/s The number of ICMP Destination Unreachable messages sent by the interface per second [ipv6IfIcmpOutDestUnreachs]. itmex6/s The number of ICMP Time Exceeded messages received by the interface per second [ipv6IfIcmpInTimeExcds]. otmex6/s The number of ICMP Time Exceeded messages sent by the interface per second [ipv6IfIcmpOutTimeExcds]. iprmpb6/s The number of ICMP Parameter Problem messages received by the interface per second [ipv6IfIcmpInParmProblems]. oprmpb6/s The number of ICMP Parameter Problem messages sent by the interface per second [ipv6IfIcmpOutParmProblems]. iredir6/s The number of Redirect messages received by the interface per second [ipv6IfIcmpInRedirects]. oredir6/s The number of Redirect messages sent by the interface by second [ipv6IfIcmpOutRedirects]. 
ipck2b6/s The number of ICMP Packet Too Big messages received by the interface per second [ipv6IfIcmpInPktTooBigs]. opck2b6/s The number of ICMP Packet Too Big messages sent by the interface per second [ipv6IfIcmpOutPktTooBigs]. With the IP keyword, statistics about IPv4 network traffic are reported. Note that IPv4 statistics depend on sadc's option -S SNMP to be collected. The following values are displayed (formal SNMP names between square brackets): irec/s The total number of input datagrams received from interfaces per second, including those received in error [ipInReceives]. fwddgm/s The number of input datagrams per second, for which this entity was not their final IP destination, as a result of which an attempt was made to find a route to forward them to that final destination [ipForwDatagrams]. idel/s The total number of input datagrams successfully delivered per second to IP user-protocols (including ICMP) [ipInDelivers]. orq/s The total number of IP datagrams which local IP user-protocols (including ICMP) supplied per second to IP in requests for transmission [ipOutRequests]. Note that this counter does not include any datagrams counted in fwddgm/s. asmrq/s The number of IP fragments received per second which needed to be reassembled at this entity [ipReasmReqds]. asmok/s The number of IP datagrams successfully re- assembled per second [ipReasmOKs]. fragok/s The number of IP datagrams that have been successfully fragmented at this entity per second [ipFragOKs]. fragcrt/s The number of IP datagram fragments that have been generated per second as a result of fragmentation at this entity [ipFragCreates]. With the EIP keyword, statistics about IPv4 network errors are reported. Note that IPv4 statistics depend on sadc's option -S SNMP to be collected. The following values are displayed (formal SNMP names between square brackets): ihdrerr/s The number of input datagrams discarded per second due to errors in their IP headers, including bad checksums, version number mismatch, other format errors, time-to-live exceeded, errors discovered in processing their IP options, etc. [ipInHdrErrors] iadrerr/s The number of input datagrams discarded per second because the IP address in their IP header's destination field was not a valid address to be received at this entity. This count includes invalid addresses (e.g., 0.0.0.0) and addresses of unsupported Classes (e.g., Class E). For entities which are not IP routers and therefore do not forward datagrams, this counter includes datagrams discarded because the destination address was not a local address [ipInAddrErrors]. iukwnpr/s The number of locally-addressed datagrams received successfully but discarded per second because of an unknown or unsupported protocol [ipInUnknownProtos]. idisc/s The number of input IP datagrams per second for which no problems were encountered to prevent their continued processing, but which were discarded (e.g., for lack of buffer space) [ipInDiscards]. Note that this counter does not include any datagrams discarded while awaiting re-assembly. odisc/s The number of output IP datagrams per second for which no problem was encountered to prevent their transmission to their destination, but which were discarded (e.g., for lack of buffer space) [ipOutDiscards]. Note that this counter would include datagrams counted in fwddgm/s if any such packets met this (discretionary) discard criterion. 
onort/s The number of IP datagrams discarded per second because no route could be found to transmit them to their destination [ipOutNoRoutes]. Note that this counter includes any packets counted in fwddgm/s which meet this 'no-route' criterion. Note that this includes any datagrams which a host cannot route because all of its default routers are down. asmf/s The number of failures detected per second by the IP re-assembly algorithm (for whatever reason: timed out, errors, etc) [ipReasmFails]. Note that this is not necessarily a count of discarded IP fragments since some algorithms can lose track of the number of fragments by combining them as they are received. fragf/s The number of IP datagrams that have been discarded per second because they needed to be fragmented at this entity but could not be, e.g., because their Don't Fragment flag was set [ipFragFails]. With the IP6 keyword, statistics about IPv6 network traffic are reported. Note that IPv6 statistics depend on sadc's option -S IPV6 to be collected. The following values are displayed (formal SNMP names between square brackets): irec6/s The total number of input datagrams received from interfaces per second, including those received in error [ipv6IfStatsInReceives]. fwddgm6/s The number of output datagrams per second which this entity received and forwarded to their final destinations [ipv6IfStatsOutForwDatagrams]. idel6/s The total number of datagrams successfully delivered per second to IPv6 user-protocols (including ICMP) [ipv6IfStatsInDelivers]. orq6/s The total number of IPv6 datagrams which local IPv6 user-protocols (including ICMP) supplied per second to IPv6 in requests for transmission [ipv6IfStatsOutRequests]. Note that this counter does not include any datagrams counted in fwddgm6/s. asmrq6/s The number of IPv6 fragments received per second which needed to be reassembled at this interface [ipv6IfStatsReasmReqds]. asmok6/s The number of IPv6 datagrams successfully reassembled per second [ipv6IfStatsReasmOKs]. imcpck6/s The number of multicast packets received per second by the interface [ipv6IfStatsInMcastPkts]. omcpck6/s The number of multicast packets transmitted per second by the interface [ipv6IfStatsOutMcastPkts]. fragok6/s The number of IPv6 datagrams that have been successfully fragmented at this output interface per second [ipv6IfStatsOutFragOKs]. fragcr6/s The number of output datagram fragments that have been generated per second as a result of fragmentation at this output interface [ipv6IfStatsOutFragCreates]. With the EIP6 keyword, statistics about IPv6 network errors are reported. Note that IPv6 statistics depend on sadc's option -S IPV6 to be collected. The following values are displayed (formal SNMP names between square brackets): ihdrer6/s The number of input datagrams discarded per second due to errors in their IPv6 headers, including version number mismatch, other format errors, hop count exceeded, errors discovered in processing their IPv6 options, etc. [ipv6IfStatsInHdrErrors] iadrer6/s The number of input datagrams discarded per second because the IPv6 address in their IPv6 header's destination field was not a valid address to be received at this entity. This count includes invalid addresses (e.g., ::0) and unsupported addresses (e.g., addresses with unallocated prefixes). For entities which are not IPv6 routers and therefore do not forward datagrams, this counter includes datagrams discarded because the destination address was not a local address [ipv6IfStatsInAddrErrors]. 
iukwnp6/s The number of locally-addressed datagrams received successfully but discarded per second because of an unknown or unsupported protocol [ipv6IfStatsInUnknownProtos]. i2big6/s The number of input datagrams that could not be forwarded per second because their size exceeded the link MTU of outgoing interface [ipv6IfStatsInTooBigErrors]. idisc6/s The number of input IPv6 datagrams per second for which no problems were encountered to prevent their continued processing, but which were discarded (e.g., for lack of buffer space) [ipv6IfStatsInDiscards]. Note that this counter does not include any datagrams discarded while awaiting re-assembly. odisc6/s The number of output IPv6 datagrams per second for which no problem was encountered to prevent their transmission to their destination, but which were discarded (e.g., for lack of buffer space) [ipv6IfStatsOutDiscards]. Note that this counter would include datagrams counted in fwddgm6/s if any such packets met this (discretionary) discard criterion. inort6/s The number of input datagrams discarded per second because no route could be found to transmit them to their destination [ipv6IfStatsInNoRoutes]. onort6/s The number of locally generated IP datagrams discarded per second because no route could be found to transmit them to their destination [unknown formal SNMP name]. asmf6/s The number of failures detected per second by the IPv6 re-assembly algorithm (for whatever reason: timed out, errors, etc.) [ipv6IfStatsReasmFails]. Note that this is not necessarily a count of discarded IPv6 fragments since some algorithms can lose track of the number of fragments by combining them as they are received. fragf6/s The number of IPv6 datagrams that have been discarded per second because they needed to be fragmented at this output interface but could not be [ipv6IfStatsOutFragFails]. itrpck6/s The number of input datagrams discarded per second because datagram frame didn't carry enough data [ipv6IfStatsInTruncatedPkts]. With the NFS keyword, statistics about NFS client activity are reported. The following values are displayed: call/s Number of RPC requests made per second. retrans/s Number of RPC requests per second, those which needed to be retransmitted (for example because of a server timeout). read/s Number of 'read' RPC calls made per second. write/s Number of 'write' RPC calls made per second. access/s Number of 'access' RPC calls made per second. getatt/s Number of 'getattr' RPC calls made per second. With the NFSD keyword, statistics about NFS server activity are reported. The following values are displayed: scall/s Number of RPC requests received per second. badcall/s Number of bad RPC requests received per second, those whose processing generated an error. packet/s Number of network packets received per second. udp/s Number of UDP packets received per second. tcp/s Number of TCP packets received per second. hit/s Number of reply cache hits per second. miss/s Number of reply cache misses per second. sread/s Number of 'read' RPC calls received per second. swrite/s Number of 'write' RPC calls received per second. saccess/s Number of 'access' RPC calls received per second. sgetatt/s Number of 'getattr' RPC calls received per second. With the SOCK keyword, statistics on sockets in use are reported (IPv4). The following values are displayed: totsck Total number of sockets used by the system. tcpsck Number of TCP sockets currently in use. udpsck Number of UDP sockets currently in use. rawsck Number of RAW sockets currently in use. 
ip-frag Number of IP fragments currently in queue. tcp-tw Number of TCP sockets in TIME_WAIT state. With the SOCK6 keyword, statistics on sockets in use are reported (IPv6). Note that IPv6 statistics depend on sadc's option -S IPV6 to be collected. The following values are displayed: tcp6sck Number of TCPv6 sockets currently in use. udp6sck Number of UDPv6 sockets currently in use. raw6sck Number of RAWv6 sockets currently in use. ip6-frag Number of IPv6 fragments currently in use. With the SOFT keyword, statistics about software-based network processing are reported. The following values are displayed: total/s The total number of network frames processed per second. dropd/s The total number of network frames dropped per second because there was no room on the processing queue. squeezd/s The number of times the softirq handler function terminated per second because its budget was consumed or the time limit was reached, but more work could have been done. rx_rps/s The number of times the CPU has been woken up per second to process packets via an inter-processor interrupt. flw_lim/s The number of times the flow limit has been reached per second. Flow limiting is an optional RPS feature that can be used to limit the number of packets queued to the backlog for each flow to a certain amount. This can help ensure that smaller flows are processed even though much larger flows are pushing packets in. blg_len The length of the network backlog. With the TCP keyword, statistics about TCPv4 network traffic are reported. Note that TCPv4 statistics depend on sadc's option -S SNMP to be collected. The following values are displayed (formal SNMP names between square brackets): active/s The number of times TCP connections have made a direct transition to the SYN-SENT state from the CLOSED state per second [tcpActiveOpens]. passive/s The number of times TCP connections have made a direct transition to the SYN-RCVD state from the LISTEN state per second [tcpPassiveOpens]. iseg/s The total number of segments received per second, including those received in error [tcpInSegs]. This count includes segments received on currently established connections. oseg/s The total number of segments sent per second, including those on current connections but excluding those containing only retransmitted octets [tcpOutSegs]. With the ETCP keyword, statistics about TCPv4 network errors are reported. Note that TCPv4 statistics depend on sadc's option -S SNMP to be collected. The following values are displayed (formal SNMP names between square brackets): atmptf/s The number of times per second TCP connections have made a direct transition to the CLOSED state from either the SYN-SENT state or the SYN-RCVD state, plus the number of times per second TCP connections have made a direct transition to the LISTEN state from the SYN-RCVD state [tcpAttemptFails]. estres/s The number of times per second TCP connections have made a direct transition to the CLOSED state from either the ESTABLISHED state or the CLOSE-WAIT state [tcpEstabResets]. retrans/s The total number of segments retransmitted per second - that is, the number of TCP segments transmitted containing one or more previously transmitted octets [tcpRetransSegs]. isegerr/s The total number of segments received in error (e.g., bad TCP checksums) per second [tcpInErrs]. orsts/s The number of TCP segments sent per second containing the RST flag [tcpOutRsts]. With the UDP keyword, statistics about UDPv4 network traffic are reported. 
Note that UDPv4 statistics depend on sadc's option -S SNMP to be collected. The following values are displayed (formal SNMP names between square brackets): idgm/s The total number of UDP datagrams delivered per second to UDP users [udpInDatagrams]. odgm/s The total number of UDP datagrams sent per second from this entity [udpOutDatagrams]. noport/s The total number of received UDP datagrams per second for which there was no application at the destination port [udpNoPorts]. idgmerr/s The number of received UDP datagrams per second that could not be delivered for reasons other than the lack of an application at the destination port [udpInErrors]. With the UDP6 keyword, statistics about UDPv6 network traffic are reported. Note that UDPv6 statistics depend on sadc's option -S IPV6 to be collected. The following values are displayed (formal SNMP names between square brackets): idgm6/s The total number of UDP datagrams delivered per second to UDP users [udpInDatagrams]. odgm6/s The total number of UDP datagrams sent per second from this entity [udpOutDatagrams]. noport6/s The total number of received UDP datagrams per second for which there was no application at the destination port [udpNoPorts]. idgmer6/s The number of received UDP datagrams per second that could not be delivered for reasons other than the lack of an application at the destination port [udpInErrors]. The ALL keyword is equivalent to specifying all the keywords above and therefore all the network activities are reported. -o [ filename ] Save the readings in the file in binary form. Each reading is in a separate record. The default value of the filename parameter is the current standard system activity daily data file. If filename is a directory instead of a plain file then it is considered as the directory where the standard system activity daily data files are located. Option -o is exclusive of option -f. All the data available from the kernel are saved in the file (in fact, sar calls its data collector sadc with the option -S ALL. See sadc(8) manual page). -P { cpu_list | ALL } Report per-processor statistics for the specified processor or processors. cpu_list is a list of comma- separated values or range of values (e.g., 0,2,4-7,12-). Note that processor 0 is the first processor, and processor all is the global average among all processors. Specifying the ALL keyword reports statistics for each individual processor, and globally for all processors. Offline processors are not displayed. -p, --pretty Make reports easier to read by a human. This option may be especially useful when displaying e.g., network interfaces or block devices statistics. -q [ keyword[,...] | ALL ] Report system load and pressure-stall statistics. Possible keywords are CPU, IO, LOAD, MEM and PSI. With the CPU keyword, CPU pressure statistics are reported. The following values are displayed: %scpu-10 Percentage of the time that at least some runnable tasks were delayed because the CPU was unavailable to them, over the last 10 second window. %scpu-60 Percentage of the time that at least some runnable tasks were delayed because the CPU was unavailable to them, over the last 60 second window. %scpu-300 Percentage of the time that at least some runnable tasks were delayed because the CPU was unavailable to them, over the last 300 second window. %scpu Percentage of the time that at least some runnable tasks were delayed because the CPU was unavailable to them, over the last time interval. With the IO keyword, I/O pressure statistics are reported. 
The following values are displayed: %sio-10 Percentage of the time that at least some tasks lost waiting for I/O, over the last 10 second window. %sio-60 Percentage of the time that at least some tasks lost waiting for I/O, over the last 60 second window. %sio-300 Percentage of the time that at least some tasks lost waiting for I/O, over the last 300 second window. %sio Percentage of the time that at least some tasks lost waiting for I/O, over the last time interval. %fio-10 Percentage of the time during which all non-idle tasks were stalled waiting for I/O, over the last 10 second window. %fio-60 Percentage of the time during which all non-idle tasks were stalled waiting for I/O, over the last 60 second window. %fio-300 Percentage of the time during which all non-idle tasks were stalled waiting for I/O, over the last 300 second window. %fio Percentage of the time during which all non-idle tasks were stalled waiting for I/O, over the last time interval. With the LOAD keyword, queue length and load averages statistics are reported. The following values are displayed: runq-sz Run queue length (number of tasks running or waiting for run time). plist-sz Number of tasks in the task list. ldavg-1 System load average for the last minute. The load average is calculated as the average number of runnable or running tasks (R state), and the number of tasks in uninterruptible sleep (D state) over the specified interval. ldavg-5 System load average for the past 5 minutes. ldavg-15 System load average for the past 15 minutes. blocked Number of tasks currently blocked, waiting for I/O to complete. With the MEM keyword, memory pressure statistics are reported. The following values are displayed: %smem-10 Percentage of the time during which at least some tasks were waiting for memory resources, over the last 10 second window. %smem-60 Percentage of the time during which at least some tasks were waiting for memory resources, over the last 60 second window. %smem-300 Percentage of the time during which at least some tasks were waiting for memory resources, over the last 300 second window. %smem Percentage of the time during which at least some tasks were waiting for memory resources, over the last time interval. %fmem-10 Percentage of the time during which all non-idle tasks were stalled waiting for memory resources, over the last 10 second window. %fmem-60 Percentage of the time during which all non-idle tasks were stalled waiting for memory resources, over the last 60 second window. %fmem-300 Percentage of the time during which all non-idle tasks were stalled waiting for memory resources, over the last 300 second window. %fmem Percentage of the time during which all non-idle tasks were stalled waiting for memory resources, over the last time interval. The PSI keyword is equivalent to specifying CPU, IO and MEM keywords together and therefore all the pressure-stall statistics are reported. The ALL keyword is equivalent to specifying all the keywords above and therefore all the statistics are reported. -r [ ALL ] Report memory utilization statistics. The ALL keyword indicates that all the memory fields should be displayed. The following values may be displayed: kbmemfree Amount of free memory available in kilobytes. kbavail Estimate of how much memory in kilobytes is available for starting new applications, without swapping. The estimate takes into account that the system needs some page cache to function well, and that not all reclaimable slab will be reclaimable, due to items being in use. 
The impact of those factors will vary from system to system. kbmemused Amount of used memory in kilobytes (calculated as total installed memory - kbmemfree - kbbuffers - kbcached - kbslab). %memused Percentage of used memory. kbbuffers Amount of memory used as buffers by the kernel in kilobytes. kbcached Amount of memory used to cache data by the kernel in kilobytes. kbcommit Amount of memory in kilobytes needed for current workload. This is an estimate of how much RAM/swap is needed to guarantee that there never is out of memory. %commit Percentage of memory needed for current workload in relation to the total amount of memory (RAM+swap). This number may be greater than 100% because the kernel usually overcommits memory. kbactive Amount of active memory in kilobytes (memory that has been used more recently and usually not reclaimed unless absolutely necessary). kbinact Amount of inactive memory in kilobytes (memory which has been less recently used. It is more eligible to be reclaimed for other purposes). kbdirty Amount of memory in kilobytes waiting to get written back to the disk. kbanonpg Amount of non-file backed pages in kilobytes mapped into userspace page tables. kbslab Amount of memory in kilobytes used by the kernel to cache data structures for its own use. kbkstack Amount of memory in kilobytes used for kernel stack space. kbpgtbl Amount of memory in kilobytes dedicated to the lowest level of page tables. kbvmused Amount of memory in kilobytes of used virtual address space. -S Report swap space utilization statistics. The following values are displayed: kbswpfree Amount of free swap space in kilobytes. kbswpused Amount of used swap space in kilobytes. %swpused Percentage of used swap space. kbswpcad Amount of cached swap memory in kilobytes. This is memory that once was swapped out, is swapped back in but still also is in the swap area (if memory is needed it doesn't need to be swapped out again because it is already in the swap area. This saves I/O). %swpcad Percentage of cached swap memory in relation to the amount of used swap space. -s [ hh:mm[:ss] ] -s [ seconds_since_the_epoch ] Set the starting time of the data, causing the sar command to extract records time-tagged at, or following, the time specified. The default starting time is 08:00:00. Hours must be given in 24-hour format, or as the number of seconds since the epoch (given as a 10 digit number). This option can be used only when data are read from a file (option -f). --sadc Indicate which data collector is called by sar. If the data collector is sought in PATH then enter "which sadc" to know where it is located. -t When reading data from a daily data file, indicate that sar should display the timestamps in the original local time of the data file creator. Without this option, the sar command displays the timestamps in the user's local time. -u [ ALL ] Report CPU utilization. The ALL keyword indicates that all the CPU fields should be displayed. The report may show the following fields: %user Percentage of CPU utilization that occurred while executing at the user level (application). Note that this field includes time spent running virtual processors. %usr Percentage of CPU utilization that occurred while executing at the user level (application). Note that this field does NOT include time spent running virtual processors. %nice Percentage of CPU utilization that occurred while executing at the user level with nice priority. 
%system Percentage of CPU utilization that occurred while executing at the system level (kernel). Note that this field includes time spent servicing hardware and software interrupts. %sys Percentage of CPU utilization that occurred while executing at the system level (kernel). Note that this field does NOT include time spent servicing hardware or software interrupts. %iowait Percentage of time that the CPU or CPUs were idle during which the system had an outstanding disk I/O request. %steal Percentage of time spent in involuntary wait by the virtual CPU or CPUs while the hypervisor was servicing another virtual processor. %irq Percentage of time spent by the CPU or CPUs to service hardware interrupts. %soft Percentage of time spent by the CPU or CPUs to service software interrupts. %guest Percentage of time spent by the CPU or CPUs to run a virtual processor. %gnice Percentage of time spent by the CPU or CPUs to run a niced guest. %idle Percentage of time that the CPU or CPUs were idle and the system did not have an outstanding disk I/O request. -V Print version number then exit. -v Report status of inode, file and other kernel tables. The following values are displayed: dentunusd Number of unused cache entries in the directory cache. file-nr Number of file handles used by the system. inode-nr Number of inode handlers used by the system. pty-nr Number of pseudo-terminals used by the system. -W Report swapping statistics. The following values are displayed: pswpin/s Total number of swap pages the system brought in per second. pswpout/s Total number of swap pages the system brought out per second. -w Report task creation and system switching activity. The following values are displayed: proc/s Total number of tasks created per second. cswch/s Total number of context switches per second. -x Extended reports: Display minimum and maximum values in addition to average ones at the end of the report. -y Report TTY devices activity. The following values are displayed: rcvin/s Number of receive interrupts per second for current serial line. Serial line number is given in the TTY column. xmtin/s Number of transmit interrupts per second for current serial line. framerr/s Number of frame errors per second for current serial line. prtyerr/s Number of parity errors per second for current serial line. brk/s Number of breaks per second for current serial line. ovrun/s Number of overrun errors per second for current serial line. -z Tell sar to omit output for any devices for which there was no activity during the sample period.
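As a worked illustration of the -o, -f, -s and -e options above (paths are hypothetical; the standard daily data files usually live under /var/log/sysstat or /var/log/sa, depending on the distribution): `sar -o /tmp/sar.data 2 30` saves 30 readings taken at 2-second intervals to a binary file, and `sar -u -f /var/log/sysstat/sa15 -s 09:00:00 -e 12:00:00` extracts only the CPU utilization records time-tagged between 09:00 and 12:00 from the daily data file for the 15th.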
# sar > Monitor performance of various Linux subsystems. More information: > https://manned.org/sar. * Report I/O and transfer rate issued to physical devices, one per second (press CTRL+C to quit): `sar -b {{1}}` * Report a total of 10 network device statistics, one per 2 seconds: `sar -n DEV {{2}} {{10}}` * Report CPU utilization, one per 2 seconds: `sar -u ALL {{2}}` * Report a total of 20 memory utilization statistics, one per second: `sar -r ALL {{1}} {{20}}` * Report the run queue length and load averages, one per second: `sar -q {{1}}` * Report paging statistics, one per 5 seconds: `sar -B {{5}}`
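A couple of further illustrative invocations based on the option descriptions above (keywords, intervals and counts are placeholders, and the corresponding data must be supported by the kernel and collected by sadc): * Report pressure-stall statistics for CPU, I/O and memory, once per second: `sar -q PSI {{1}}` * Report statistics for currently mounted filesystems, showing mountpoints instead of device names, once per 5 seconds: `sar -F MOUNT {{5}}`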
rename
rename will rename the specified files by replacing the first occurrence of expression in their name by replacement. -s, --symlink Do not rename a symlink but change where it points. -v, --verbose Show which files were renamed, if any. -n, --no-act Do not make any changes; add --verbose to see what changes would be made. -a, --all Replace all occurrences of expression rather than only the first one. -l, --last Replace the last occurrence of expression rather than the first one. -o, --no-overwrite Do not overwrite existing files. When --symlink is active, do not overwrite symlinks pointing to existing targets. -i, --interactive Ask before overwriting existing files. -h, --help Display help text and exit. -V, --version Print version and exit.
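For illustration (file names are hypothetical), previewing and then performing an extension change on a group of files could look like: `rename --no-act --verbose .htm .html *.htm` to see which renames would be made, followed by `rename --verbose .htm .html *.htm` to actually perform them.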
# rename > Rename a file or group of files with a regular expression. More information: > https://www.manpagez.com/man/2/rename/. * Replace `from` with `to` in the filenames of the specified files: `rename 's/{{from}}/{{to}}/' {{*.txt}}`
strip
GNU strip discards all symbols from object files objfile. The list of object files may include archives. At least one object file must be given. strip modifies the files named in its argument, rather than writing modified copies under different names. -F bfdname --target=bfdname Treat the original objfile as a file with the object code format bfdname, and rewrite it in the same format. --help Show a summary of the options to strip and exit. --info Display a list showing all architectures and object formats available. -I bfdname --input-target=bfdname Treat the original objfile as a file with the object code format bfdname. -O bfdname --output-target=bfdname Replace objfile with a file in the output format bfdname. -R sectionname --remove-section=sectionname Remove any section named sectionname from the output file, in addition to whatever sections would otherwise be removed. This option may be given more than once. Note that using this option inappropriately may make the output file unusable. The wildcard character * may be given at the end of sectionname. If so, then any section starting with sectionname will be removed. If the first character of sectionpattern is the exclamation point (!) then matching sections will not be removed even if an earlier use of --remove-section on the same command line would otherwise remove them. For example: --remove-section=.text.* --remove-section=!.text.foo will remove all sections matching the pattern '.text.*', but will not remove the section '.text.foo'. --keep-section=sectionpattern When removing sections from the output file, keep sections that match sectionpattern. --remove-relocations=sectionpattern Remove relocations from the output file for any section matching sectionpattern. This option may be given more than once. Note that using this option inappropriately may make the output file unusable. Wildcard characters are accepted in sectionpattern. For example: --remove-relocations=.text.* will remove the relocations for all sections matching the pattern '.text.*'. If the first character of sectionpattern is the exclamation point (!) then matching sections will not have their relocations removed even if an earlier use of --remove-relocations on the same command line would otherwise cause the relocations to be removed. For example: --remove-relocations=.text.* --remove-relocations=!.text.foo will remove all relocations for sections matching the pattern '.text.*', but will not remove relocations for the section '.text.foo'. -s --strip-all Remove all symbols. -g -S -d --strip-debug Remove debugging symbols only. --strip-dwo Remove the contents of all DWARF .dwo sections, leaving the remaining debugging sections and all symbols intact. See the description of this option in the objcopy section for more information. --strip-unneeded Remove all symbols that are not needed for relocation processing in addition to debugging symbols and sections stripped by --strip-debug. -K symbolname --keep-symbol=symbolname When stripping symbols, keep symbol symbolname even if it would normally be stripped. This option may be given more than once. -M --merge-notes --no-merge-notes For ELF files, attempt (or do not attempt) to reduce the size of any SHT_NOTE type sections by removing duplicate notes. The default is to attempt this reduction unless stripping debug or DWO information. -N symbolname --strip-symbol=symbolname Remove symbol symbolname from the source file. This option may be given more than once, and may be combined with strip options other than -K.
-o file Put the stripped output in file, rather than replacing the existing file. When this argument is used, only one objfile argument may be specified. -p --preserve-dates Preserve the access and modification dates of the file. -D --enable-deterministic-archives Operate in deterministic mode. When copying archive members and writing the archive index, use zero for UIDs, GIDs, timestamps, and use consistent file modes for all files. If binutils was configured with --enable-deterministic-archives, then this mode is on by default. It can be disabled with the -U option, below. -U --disable-deterministic-archives Do not operate in deterministic mode. This is the inverse of the -D option, above: when copying archive members and writing the archive index, use their actual UID, GID, timestamp, and file mode values. This is the default unless binutils was configured with --enable-deterministic-archives. -w --wildcard Permit regular expressions in symbolnames used in other command line options. The question mark (?), asterisk (*), backslash (\) and square brackets ([]) operators can be used anywhere in the symbol name. If the first character of the symbol name is the exclamation point (!) then the sense of the switch is reversed for that symbol. For example: -w -K !foo -K fo* would cause strip to only keep symbols that start with the letters "fo", but to discard the symbol "foo". -x --discard-all Remove non-global symbols. -X --discard-locals Remove compiler-generated local symbols. (These usually start with L or ..) --keep-section-symbols When stripping a file, perhaps with --strip-debug or --strip-unneeded, retain any symbols specifying section names, which would otherwise get stripped. --keep-file-symbols When stripping a file, perhaps with --strip-debug or --strip-unneeded, retain any symbols specifying source file names, which would otherwise get stripped. --only-keep-debug Strip a file, emptying the contents of any sections that would not be stripped by --strip-debug and leaving the debugging sections intact. In ELF files, this preserves all the note sections in the output as well. Note - the section headers of the stripped sections are preserved, including their sizes, but the contents of the section are discarded. The section headers are preserved so that other tools can match up the debuginfo file with the real executable, even if that executable has been relocated to a different address space. The intention is that this option will be used in conjunction with --add-gnu-debuglink to create a two part executable: one, a stripped binary which will occupy less space in RAM and in a distribution; and the second, a debugging information file which is only needed if debugging abilities are required. The suggested procedure to create these files is as follows: 1. Link the executable as normal. Assuming that it is called "foo" then... 2. Run "objcopy --only-keep-debug foo foo.dbg" to create a file containing the debugging info. 3. Run "objcopy --strip-debug foo" to create a stripped executable. 4. Run "objcopy --add-gnu-debuglink=foo.dbg foo" to add a link to the debugging info into the stripped executable. Note - the choice of ".dbg" as an extension for the debug info file is arbitrary. Also the "--only-keep-debug" step is optional. You could instead do this: 1. Link the executable as normal. 2. Copy "foo" to "foo.full". 3. Run "strip --strip-debug foo". 4. Run "objcopy --add-gnu-debuglink=foo.full foo". I.e., the file pointed to by the --add-gnu-debuglink can be the full executable.
It does not have to be a file created by the --only-keep-debug switch. Note - this switch is only intended for use on fully linked files. It does not make sense to use it on object files where the debugging information may be incomplete. Besides, the gnu_debuglink feature currently only supports the presence of one filename containing debugging information, not multiple filenames on a one-per-object-file basis. -V --version Show the version number for strip. -v --verbose Verbose output: list all object files modified. In the case of archives, strip -v lists all members of the archive. @file Read command-line options from file. The options read are inserted in place of the original @file option. If file does not exist, or cannot be read, then the option will be treated literally, and not removed. Options in file are separated by whitespace. A whitespace character may be included in an option by surrounding the entire option in either single or double quotes. Any character (including a backslash) may be included by prefixing the character to be included with a backslash. The file may itself contain additional @file options; any such options will be processed recursively.
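Putting the --only-keep-debug procedure described above into a single sketch (the binary name foo and the .dbg extension are just the examples used in the text): `objcopy --only-keep-debug foo foo.dbg`, then `objcopy --strip-debug foo` (or `strip --strip-debug foo`), then `objcopy --add-gnu-debuglink=foo.dbg foo` leaves a small stripped executable plus a separate debugging information file that debuggers can locate through the embedded debug link.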
# strip > Discard symbols from executables or object files. More information: > https://manned.org/strip. * Replace the input file with its stripped version: `strip {{path/to/file}}` * Strip symbols from a file, saving the output to a specific file: `strip {{path/to/input_file}} -o {{path/to/output_file}}` * Strip debug symbols only: `strip --strip-debug {{path/to/file.o}}`
head
The head utility shall copy its input files to the standard output, ending the output for each file at a designated point. Copying shall end at the point in each input file indicated by the -n number option. The option-argument number shall be counted in units of lines. The head utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines. The following option shall be supported: -n number The first number lines of each input file shall be copied to standard output. The application shall ensure that the number option-argument is a positive decimal integer. When a file contains less than number lines, it shall be copied to standard output in its entirety. This shall not be an error. If no options are specified, head shall act as if -n 10 had been specified.
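For example, `head -n 5 notes.txt` writes the first five lines of a hypothetical file notes.txt to standard output, and plain `head notes.txt` behaves as if -n 10 had been specified; if the file has fewer lines than requested, it is copied in its entirety without error.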
# head > Output the first part of files. More information: > https://keith.github.io/xcode-man-pages/head.1.html. * Output the first few lines of a file: `head --lines {{8}} {{path/to/file}}` * Output the first few bytes of a file: `head --bytes {{8}} {{path/to/file}}` * Output everything but the last few lines of a file: `head --lines -{{8}} {{path/to/file}}` * Output everything but the last few bytes of a file: `head --bytes -{{8}} {{path/to/file}}`
wall
wall displays a message, or the contents of a file, or otherwise its standard input, on the terminals of all currently logged-in users. The command wraps lines that are longer than 79 characters; shorter lines are padded with whitespace to 79 characters. The command always puts a carriage return and new line at the end of each line. Only the superuser can write on the terminals of users who have chosen to deny messages or are using a program which automatically denies messages. Reading from a file is refused when the invoker is not the superuser and the program is set-user-ID or set-group-ID. -n, --nobanner Suppress the banner. -t, --timeout timeout Abandon the write attempt to the terminals after timeout seconds. This timeout must be a positive integer. The default value is 300 seconds, which is a legacy from the time when people ran terminals over modem lines. -g, --group group Limit the message to members of the group given as the group argument. The argument can be a group name or a GID. -h, --help Display help text and exit. -V, --version Print version and exit.
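For illustration (the message text and group name are hypothetical): `echo "Maintenance starts in 10 minutes" | wall` broadcasts a message read from standard input to all logged-in terminals, while `wall --group staff --timeout 30 "Build server rebooting"` limits the message to members of the staff group and gives up on unresponsive terminals after 30 seconds.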
# wall > Write a message on the terminals of users currently logged in. More > information: https://manned.org/wall. * Send a message: `wall {{message}}` * Send a message to users that belong to a specific group: `wall --group {{group_name}} {{message}}` * Send a message from a file: `wall {{file}}` * Send a message with timeout (default 300): `wall --timeout {{seconds}} {{file}}`
stat
Display file or file system status. Mandatory arguments to long options are mandatory for short options too. -L, --dereference follow links -f, --file-system display file system status instead of file status --cached=MODE specify how to use cached attributes; useful on remote file systems. See MODE below -c --format=FORMAT use the specified FORMAT instead of the default; output a newline after each use of FORMAT --printf=FORMAT like --format, but interpret backslash escapes, and do not output a mandatory trailing newline; if you want a newline, include \n in FORMAT -t, --terse print the information in terse form --help display this help and exit --version output version information and exit The MODE argument of --cached can be: always, never, or default. 'always' will use cached attributes if available, while 'never' will try to synchronize with the latest attributes, and 'default' will leave it up to the underlying file system. The valid format sequences for files (without --file-system): %a permission bits in octal (note '#' and '0' printf flags) %A permission bits and file type in human readable form %b number of blocks allocated (see %B) %B the size in bytes of each block reported by %b %C SELinux security context string %d device number in decimal (st_dev) %D device number in hex (st_dev) %Hd major device number in decimal %Ld minor device number in decimal %f raw mode in hex %F file type %g group ID of owner %G group name of owner %h number of hard links %i inode number %m mount point %n file name %N quoted file name with dereference if symbolic link %o optimal I/O transfer size hint %s total size, in bytes %r device type in decimal (st_rdev) %R device type in hex (st_rdev) %Hr major device type in decimal, for character/block device special files %Lr minor device type in decimal, for character/block device special files %t major device type in hex, for character/block device special files %T minor device type in hex, for character/block device special files %u user ID of owner %U user name of owner %w time of file birth, human-readable; - if unknown %W time of file birth, seconds since Epoch; 0 if unknown %x time of last access, human-readable %X time of last access, seconds since Epoch %y time of last data modification, human-readable %Y time of last data modification, seconds since Epoch %z time of last status change, human-readable %Z time of last status change, seconds since Epoch Valid format sequences for file systems: %a free blocks available to non-superuser %b total data blocks in file system %c total file nodes in file system %d free file nodes in file system %f free blocks in file system %i file system ID in hex %l maximum length of filenames %n file name %s block size (for faster transfers) %S fundamental block size (for block counts) %t file system type in hex %T file system type in human readable form --terse is equivalent to the following FORMAT: %n %s %b %f %u %g %D %i %h %t %T %X %Y %Z %W %o %C --terse --file-system is equivalent to the following FORMAT: %n %i %l %t %s %S %b %f %a %c %d NOTE: your shell may have its own version of stat, which usually supersedes the version described here. Please refer to your shell's documentation for details about the options it supports.
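As an illustration of the format sequences above (the file name is hypothetical): `stat --format='%n %s bytes %U %a' report.txt` prints the file name, size in bytes, owner name and octal permission bits on one line, while `stat --file-system --format='%T %a' /` prints the file system type and the number of free blocks available to non-superusers for the root file system.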
# stat > Display file status. More information: https://ss64.com/osx/stat.html. * Show file properties such as size, permissions, creation and access dates among others: `stat {{path/to/file}}` * Same as above but verbose (more similar to Linux's `stat`): `stat -x {{path/to/file}}` * Show only octal file permissions: `stat -f %Mp%Lp {{path/to/file}}` * Show owner and group of the file: `stat -f "%Su %Sg" {{path/to/file}}` * Show the size of the file in bytes: `stat -f "%z %N" {{path/to/file}}`
ar
The ar utility is part of the Software Development Utilities option. The ar utility can be used to create and maintain groups of files combined into an archive. Once an archive has been created, new files can be added, and existing files in an archive can be extracted, deleted, or replaced. When an archive consists entirely of valid object files, the implementation shall format the archive so that it is usable as a library for link editing (see c99 and fort77). When some of the archived files are not valid object files, the suitability of the archive for library use is undefined. If an archive consists entirely of printable files, the entire archive shall be printable. When ar creates an archive, it creates administrative information indicating whether a symbol table is present in the archive. When there is at least one object file that ar recognizes as such in the archive, an archive symbol table shall be created in the archive and maintained by ar; it is used by the link editor to search the archive. Whenever the ar utility is used to create or update the contents of such an archive, the symbol table shall be rebuilt. The -s option shall force the symbol table to be rebuilt. All file operands can be pathnames. However, files within archives shall be named by a filename, which is the last component of the pathname used when the file was entered into the archive. The comparison of file operands to the names of files in archives shall be performed by comparing the last component of the operand to the name of the file in the archive. It is unspecified whether multiple files in the archive may be identically named. In the case of such files, however, each file and posname operand shall match only the first file in the archive having a name that is the same as the last component of the operand. The ar utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines, except for Guideline 9. The following options shall be supported: -a Position new files in the archive after the file named by the posname operand. -b Position new files in the archive before the file named by the posname operand. -c Suppress the diagnostic message that is written to standard error by default when the archive archive is created. -C Prevent extracted files from replacing like-named files in the file system. This option is useful when -T is also used, to prevent truncated filenames from replacing files with the same prefix. -d Delete one or more files from archive. -i Position new files in the archive before the file in the archive named by the posname operand (equivalent to -b). -m Move the named files in the archive. The -a, -b, or -i options with the posname operand indicate the position; otherwise, move the named files in the archive to the end of the archive. -p Write the contents of the files in the archive named by file operands from archive to the standard output. If no file operands are specified, the contents of all files in the archive shall be written in the order of the archive. -q Append the named files to the end of the archive. In this case ar does not check whether the added files are already in the archive. This is useful to bypass the searching otherwise done when creating a large archive piece by piece. -r Replace or add files to archive. If the archive named by archive does not exist, a new archive shall be created and a diagnostic message shall be written to standard error (unless the -c option is specified).
If no files are specified and the archive exists, the results are undefined. Files that replace existing files in the archive shall not change the order of the archive. Files that do not replace existing files in the archive shall be appended to the archive unless a -a, -b, or -i option specifies another position. -s Force the regeneration of the archive symbol table even if ar is not invoked with an option that modifies the archive contents. This option is useful to restore the archive symbol table after it has been stripped; see strip. -t Write a table of contents of archive to the standard output. Only the files specified by the file operands shall be included in the written list. If no file operands are specified, all files in archive shall be included in the order of the archive. -T Allow filename truncation of extracted files whose archive names are longer than the file system can support. By default, extracting a file with a name that is too long shall be an error; a diagnostic message shall be written and the file shall not be extracted. -u Update older files in the archive. When used with the -r option, files in the archive shall be replaced only if the corresponding file has a modification time that is at least as new as the modification time of the file in the archive. -v Give verbose output. When used with the option characters -d, -r, or -x, write a detailed file-by-file description of the archive creation and maintenance activity, as described in the STDOUT section. When used with -p, write the name of the file in the archive to the standard output before writing the file in the archive itself to the standard output, as described in the STDOUT section. When used with -t, include a long listing of information about the files in the archive, as described in the STDOUT section. -x Extract the files in the archive named by the file operands from archive. The contents of the archive shall not be changed. If no file operands are given, all files in the archive shall be extracted. The modification time of each file extracted shall be set to the time the file is extracted from the archive.
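A short sketch of the POSIX option forms described above (the archive and object file names are placeholders):

    # Create (or update) an archive quietly and add two object files
    ar -r -c libdemo.a alpha.o beta.o
    # Insert another member, positioned after alpha.o
    ar -r -a alpha.o libdemo.a gamma.o
    # Rebuild the symbol table, then print a verbose table of contents
    ar -s libdemo.a
    ar -t -v libdemo.a
    # Extract one member without changing the archive
    ar -x libdemo.a beta.o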
# ar

> Create, modify, and extract from Unix archives. Typically used for static libraries (`.a`) and Debian packages (`.deb`). See also: `tar`.
> More information: https://manned.org/ar.

* E[x]tract all members from an archive: `ar x {{path/to/file.a}}`
* Lis[t] contents in a specific archive: `ar t {{path/to/file.ar}}`
* [r]eplace or add specific files to an archive: `ar r {{path/to/file.deb}} {{path/to/debian-binary path/to/control.tar.gz path/to/data.tar.xz ...}}`
* In[s]ert an object file index (equivalent to using `ranlib`): `ar s {{path/to/file.a}}`
* Create an archive with specific files and an accompanying object file index: `ar rs {{path/to/file.a}} {{path/to/file1.o path/to/file2.o ...}}`
git
Git is a fast, scalable, distributed revision control system with an unusually rich command set that provides both high-level operations and full access to internals. See gittutorial(7) to get started, then see giteveryday(7) for a useful minimum set of commands. The Git User’s Manual[1] has a more in-depth introduction. After you mastered the basic concepts, you can come back to this page to learn what commands Git offers. You can learn more about individual Git commands with "git help command". gitcli(7) manual page gives you an overview of the command-line command syntax. A formatted and hyperlinked copy of the latest Git documentation can be viewed at https://git.github.io/htmldocs/git.html or https://git-scm.com/docs . -v, --version Prints the Git suite version that the git program came from. This option is internally converted to git version ... and accepts the same options as the git-version(1) command. If --help is also given, it takes precedence over --version. -h, --help Prints the synopsis and a list of the most commonly used commands. If the option --all or -a is given then all available commands are printed. If a Git command is named this option will bring up the manual page for that command. Other options are available to control how the manual page is displayed. See git-help(1) for more information, because git --help ... is converted internally into git help .... -C <path> Run as if git was started in <path> instead of the current working directory. When multiple -C options are given, each subsequent non-absolute -C <path> is interpreted relative to the preceding -C <path>. If <path> is present but empty, e.g. -C "", then the current working directory is left unchanged. This option affects options that expect path name like --git-dir and --work-tree in that their interpretations of the path names would be made relative to the working directory caused by the -C option. For example the following invocations are equivalent: git --git-dir=a.git --work-tree=b -C c status git --git-dir=c/a.git --work-tree=c/b status -c <name>=<value> Pass a configuration parameter to the command. The value given will override values from configuration files. The <name> is expected in the same format as listed by git config (subkeys separated by dots). Note that omitting the = in git -c foo.bar ... is allowed and sets foo.bar to the boolean true value (just like [foo]bar would in a config file). Including the equals but with an empty value (like git -c foo.bar= ...) sets foo.bar to the empty string which git config --type=bool will convert to false. --config-env=<name>=<envvar> Like -c <name>=<value>, give configuration variable <name> a value, where <envvar> is the name of an environment variable from which to retrieve the value. Unlike -c there is no shortcut for directly setting the value to an empty string, instead the environment variable itself must be set to the empty string. It is an error if the <envvar> does not exist in the environment. <envvar> may not contain an equals sign to avoid ambiguity with <name> containing one. This is useful for cases where you want to pass transitory configuration options to git, but are doing so on OS’s where other processes might be able to read your cmdline (e.g. /proc/self/cmdline), but not your environ (e.g. /proc/self/environ). That behavior is the default on Linux, but may not be on your system. Note that this might add security for variables such as http.extraHeader where the sensitive information is part of the value, but not e.g. 
url.<base>.insteadOf where the sensitive information can be part of the key. --exec-path[=<path>] Path to wherever your core Git programs are installed. This can also be controlled by setting the GIT_EXEC_PATH environment variable. If no path is given, git will print the current setting and then exit. --html-path Print the path, without trailing slash, where Git’s HTML documentation is installed and exit. --man-path Print the manpath (see man(1)) for the man pages for this version of Git and exit. --info-path Print the path where the Info files documenting this version of Git are installed and exit. -p, --paginate Pipe all output into less (or if set, $PAGER) if standard output is a terminal. This overrides the pager.<cmd> configuration options (see the "Configuration Mechanism" section below). -P, --no-pager Do not pipe Git output into a pager. --git-dir=<path> Set the path to the repository (".git" directory). This can also be controlled by setting the GIT_DIR environment variable. It can be an absolute path or relative path to current working directory. Specifying the location of the ".git" directory using this option (or GIT_DIR environment variable) turns off the repository discovery that tries to find a directory with ".git" subdirectory (which is how the repository and the top-level of the working tree are discovered), and tells Git that you are at the top level of the working tree. If you are not at the top-level directory of the working tree, you should tell Git where the top-level of the working tree is, with the --work-tree=<path> option (or GIT_WORK_TREE environment variable) If you just want to run git as if it was started in <path> then use git -C <path>. --work-tree=<path> Set the path to the working tree. It can be an absolute path or a path relative to the current working directory. This can also be controlled by setting the GIT_WORK_TREE environment variable and the core.worktree configuration variable (see core.worktree in git-config(1) for a more detailed discussion). --namespace=<path> Set the Git namespace. See gitnamespaces(7) for more details. Equivalent to setting the GIT_NAMESPACE environment variable. --bare Treat the repository as a bare repository. If GIT_DIR environment is not set, it is set to the current working directory. --no-replace-objects Do not use replacement refs to replace Git objects. See git-replace(1) for more information. --literal-pathspecs Treat pathspecs literally (i.e. no globbing, no pathspec magic). This is equivalent to setting the GIT_LITERAL_PATHSPECS environment variable to 1. --glob-pathspecs Add "glob" magic to all pathspec. This is equivalent to setting the GIT_GLOB_PATHSPECS environment variable to 1. Disabling globbing on individual pathspecs can be done using pathspec magic ":(literal)" --noglob-pathspecs Add "literal" magic to all pathspec. This is equivalent to setting the GIT_NOGLOB_PATHSPECS environment variable to 1. Enabling globbing on individual pathspecs can be done using pathspec magic ":(glob)" --icase-pathspecs Add "icase" magic to all pathspec. This is equivalent to setting the GIT_ICASE_PATHSPECS environment variable to 1. --no-optional-locks Do not perform optional operations that require locks. This is equivalent to setting the GIT_OPTIONAL_LOCKS to 0. --list-cmds=group[,group...] List commands by group. This is an internal/experimental option and may change or be removed in the future. 
Supported groups are: builtins, parseopt (builtin commands that use parse-options), main (all commands in libexec directory), others (all other commands in $PATH that have git- prefix), list-<category> (see categories in command-list.txt), nohelpers (exclude helper commands), alias and config (retrieve command list from config variable completion.commands) --attr-source=<tree-ish> Read gitattributes from <tree-ish> instead of the worktree. See gitattributes(5). This is equivalent to setting the GIT_ATTR_SOURCE environment variable.
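A few self-contained invocations exercising the options above (the repository path and the token value are hypothetical placeholders):

    # Run a subcommand as if git had been started in /srv/repo
    git -C /srv/repo status
    # Override a configuration value for this invocation only
    git -c core.pager=cat log -1
    # Pass a sensitive value through the environment rather than the command line
    AUTH_HEADER='Authorization: Bearer <token>' \
        git --config-env=http.extraHeader=AUTH_HEADER fetch origin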
# git

> Distributed version control system. Some subcommands such as `commit`, `add`, `branch`, `checkout`, `push`, etc. have their own usage documentation, accessible via `tldr git subcommand`.
> More information: https://git-scm.com/.

* Check the Git version: `git --version`
* Show general help: `git --help`
* Show help on a Git subcommand (like `clone`, `add`, `push`, `log`, etc.): `git help {{subcommand}}`
* Execute a Git subcommand: `git {{subcommand}}`
* Execute a Git subcommand on a custom repository root path: `git -C {{path/to/repo}} {{subcommand}}`
* Execute a Git subcommand with a given configuration set: `git -c '{{config.key}}={{value}}' {{subcommand}}`
printenv
Print the values of the specified environment VARIABLE(s). If no VARIABLE is specified, print name and value pairs for them all. -0, --null end each output line with NUL, not newline --help display this help and exit --version output version information and exit NOTE: your shell may have its own version of printenv, which usually supersedes the version described here. Please refer to your shell's documentation for details about the options it supports.
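Two small usage sketches (GNU coreutils behaviour; the variable names are examples):

    # NUL-terminated output is safe to post-process even if values contain newlines
    printenv --null HOME PATH | xargs -0 -n 1 echo
    # The exit status reports whether every requested variable was set
    printenv NO_SUCH_VARIABLE || echo "NO_SUCH_VARIABLE is not set"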
# printenv

> Print values of all or specific environment variables.
> More information: https://www.gnu.org/software/coreutils/printenv.

* Display key-value pairs of all environment variables: `printenv`
* Display the value of a specific variable: `printenv {{HOME}}`
* Display the value of a variable and end with NUL instead of newline: `printenv --null {{HOME}}`
chsh
The chsh command changes the user login shell. This determines the name of the user's initial login command. A normal user may only change the login shell for her own account; the superuser may change the login shell for any account. The options which apply to the chsh command are: -h, --help Display help message and exit. -R, --root CHROOT_DIR Apply changes in the CHROOT_DIR directory and use the configuration files from the CHROOT_DIR directory. Only absolute paths are supported. -s, --shell SHELL The name of the user's new login shell. Setting this field to blank causes the system to select the default login shell. If the -s option is not selected, chsh operates in an interactive fashion, prompting the user with the current login shell. Enter the new value to change the shell, or leave the line blank to use the current one. The current shell is displayed between a pair of [ ] marks.
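A brief sketch (the shell paths, mount point, and user name are placeholders; most systems also require the new shell to be listed in /etc/shells):

    # Change the invoking user's login shell non-interactively
    chsh -s /bin/zsh
    # Change a shell inside another system image mounted at /mnt/sysroot (absolute path required)
    chsh -R /mnt/sysroot -s /bin/bash alice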
# chsh

> Change user's login shell.
> More information: https://manned.org/chsh.

* Set a specific login shell for the current user interactively: `chsh`
* Set a specific login [s]hell for the current user: `chsh -s {{path/to/shell}}`
* Set a login [s]hell for a specific user: `chsh -s {{path/to/shell}} {{username}}`
* [l]ist available shells: `chsh -l`
pax
The pax utility shall read, write, and write lists of the members of archive files and copy directory hierarchies. A variety of archive formats shall be supported; see the -x format option. The action to be taken depends on the presence of the -r and -w options. The four combinations of -r and -w are referred to as the four modes of operation: list, read, write, and copy modes, corresponding respectively to the four forms shown in the SYNOPSIS section. list In list mode (when neither -r nor -w are specified), pax shall write the names of the members of the archive file read from the standard input, with pathnames matching the specified patterns, to standard output. If a named file is of type directory, the file hierarchy rooted at that file shall be listed as well. read In read mode (when -r is specified, but -w is not), pax shall extract the members of the archive file read from the standard input, with pathnames matching the specified patterns. If an extracted file is of type directory, the file hierarchy rooted at that file shall be extracted as well. The extracted files shall be created performing pathname resolution with the directory in which pax was invoked as the current working directory. If an attempt is made to extract a directory when the directory already exists, this shall not be considered an error. If an attempt is made to extract a FIFO when the FIFO already exists, this shall not be considered an error. The ownership, access, and modification times, and file mode of the restored files are discussed under the -p option. write In write mode (when -w is specified, but -r is not), pax shall write the contents of the file operands to the standard output in an archive format. If no file operands are specified, a list of files to copy, one per line, shall be read from the standard input and each entry in this list shall be processed as if it had been a file operand on the command line. A file of type directory shall include all of the files in the file hierarchy rooted at the file. copy In copy mode (when both -r and -w are specified), pax shall copy the file operands to the destination directory. If no file operands are specified, a list of files to copy, one per line, shall be read from the standard input. A file of type directory shall include all of the files in the file hierarchy rooted at the file. The effect of the copy shall be as if the copied files were written to a pax format archive file and then subsequently extracted, except that copying of sockets may be supported even if archiving them in write mode is not supported, and that there may be hard links between the original and the copied files. If the destination directory is a subdirectory of one of the files to be copied, the results are unspecified. If the destination directory is a file of a type not defined by the System Interfaces volume of POSIX.1‐2017, the results are implementation-defined; otherwise, it shall be an error for the file named by the directory operand not to exist, not be writable by the user, or not be a file of type directory. 
In read or copy modes, if intermediate directories are necessary to extract an archive member, pax shall perform actions equivalent to the mkdir() function defined in the System Interfaces volume of POSIX.1‐2017, called with the following arguments: * The intermediate directory used as the path argument * The value of the bitwise-inclusive OR of S_IRWXU, S_IRWXG, and S_IRWXO as the mode argument If any specified pattern or file operands are not matched by at least one file or archive member, pax shall write a diagnostic message to standard error for each one that did not match and exit with a non-zero exit status. The archive formats described in the EXTENDED DESCRIPTION section shall be automatically detected on input. The default output archive format shall be implementation-defined. A single archive can span multiple files. The pax utility shall determine, in an implementation-defined manner, what file to read or write as the next file. If the selected archive format supports the specification of linked files, it shall be an error if these files cannot be linked when the archive is extracted. For archive formats that do not store file contents with each name that causes a hard link, if the file that contains the data is not extracted during this pax session, either the data shall be restored from the original file, or a diagnostic message shall be displayed with the name of a file that can be used to extract the data. In traversing directories, pax shall detect infinite loops; that is, entering a previously visited directory that is an ancestor of the last file visited. When it detects an infinite loop, pax shall write a diagnostic message to standard error and shall terminate. The pax utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines, except that the order of presentation of the -o, -p, and -s options is significant. The following options shall be supported: -r Read an archive file from standard input. -w Write files to the standard output in the specified archive format. -a Append files to the end of the archive. It is implementation-defined which devices on the system support appending. Additional file formats unspecified by this volume of POSIX.1‐2017 may impose restrictions on appending. -b blocksize Block the output at a positive decimal integer number of bytes per write to the archive file. Devices and archive formats may impose restrictions on blocking. Blocking shall be automatically determined on input. Conforming applications shall not specify a blocksize value larger than 32256. Default blocking when creating archives depends on the archive format. (See the -x option below.) -c Match all file or archive members except those specified by the pattern or file operands. -d Cause files of type directory being copied or archived or archive members of type directory being extracted or listed to match only the file or archive member itself and not the file hierarchy rooted at the file. -f archive Specify the pathname of the input or output archive, overriding the default standard input (in list or read modes) or standard output (write mode). -H If a symbolic link referencing a file of type directory is specified on the command line, pax shall archive the file hierarchy rooted in the file referenced by the link, using the name of the link as the root of the file hierarchy. 
Otherwise, if a symbolic link referencing a file of any other file type which pax can normally archive is specified on the command line, then pax shall archive the file referenced by the link, using the name of the link. The default behavior, when neither -H or -L are specified, shall be to archive the symbolic link itself. -i Interactively rename files or archive members. For each archive member matching a pattern operand or file matching a file operand, a prompt shall be written to the file /dev/tty. The prompt shall contain the name of the file or archive member, but the format is otherwise unspecified. A line shall then be read from /dev/tty. If this line is blank, the file or archive member shall be skipped. If this line consists of a single period, the file or archive member shall be processed with no modification to its name. Otherwise, its name shall be replaced with the contents of the line. The pax utility shall immediately exit with a non-zero exit status if end-of-file is encountered when reading a response or if /dev/tty cannot be opened for reading and writing. The results of extracting a hard link to a file that has been renamed during extraction are unspecified. -k Prevent the overwriting of existing files. -l (The letter ell.) In copy mode, hard links shall be made between the source and destination file hierarchies whenever possible. If specified in conjunction with -H or -L, when a symbolic link is encountered, the hard link created in the destination file hierarchy shall be to the file referenced by the symbolic link. If specified when neither -H nor -L is specified, when a symbolic link is encountered, the implementation shall create a hard link to the symbolic link in the source file hierarchy or copy the symbolic link to the destination. -L If a symbolic link referencing a file of type directory is specified on the command line or encountered during the traversal of a file hierarchy, pax shall archive the file hierarchy rooted in the file referenced by the link, using the name of the link as the root of the file hierarchy. Otherwise, if a symbolic link referencing a file of any other file type which pax can normally archive is specified on the command line or encountered during the traversal of a file hierarchy, pax shall archive the file referenced by the link, using the name of the link. The default behavior, when neither -H or -L are specified, shall be to archive the symbolic link itself. -n Select the first archive member that matches each pattern operand. No more than one archive member shall be matched for each pattern (although members of type directory shall still match the file hierarchy rooted at that file). -o options Provide information to the implementation to modify the algorithm for extracting or writing files. The value of options shall consist of one or more <comma>-separated keywords of the form: keyword[[:]=value][,keyword[[:]=value], ...] Some keywords apply only to certain file formats, as indicated with each description. Use of keywords that are inapplicable to the file format being processed produces undefined results. Keywords in the options argument shall be a string that would be a valid portable filename as described in the Base Definitions volume of POSIX.1‐2017, Section 3.282, Portable Filename Character Set. Note: Keywords are not expected to be filenames, merely to follow the same character composition rules as portable filenames. Keywords can be preceded with white space. 
The value field shall consist of zero or more characters; within value, the application shall precede any literal <comma> with a <backslash>, which shall be ignored, but preserves the <comma> as part of value. A <comma> as the final character, or a <comma> followed solely by white space as the final characters, in options shall be ignored. Multiple -o options can be specified; if keywords given to these multiple -o options conflict, the keywords and values appearing later in command line sequence shall take precedence and the earlier shall be silently ignored. The following keyword values of options shall be supported for the file formats as indicated: delete=pattern (Applicable only to the -x pax format.) When used in write or copy mode, pax shall omit from extended header records that it produces any keywords matching the string pattern. When used in read or list mode, pax shall ignore any keywords matching the string pattern in the extended header records. In both cases, matching shall be performed using the pattern matching notation described in Section 2.13.1, Patterns Matching a Single Character and Section 2.13.2, Patterns Matching Multiple Characters. For example: -o delete=security.* would suppress security-related information. See pax Extended Header for extended header record keyword usage. When multiple -odelete=pattern options are specified, the patterns shall be additive; all keywords matching the specified string patterns shall be omitted from extended header records that pax produces. exthdr.name=string (Applicable only to the -x pax format.) This keyword allows user control over the name that is written into the ustar header blocks for the extended header produced under the circumstances described in pax Header Block. The name shall be the contents of string, after the following character substitutions have been made: ┌──────────┬────────────────────────────────────────┐ │ string │ │ │Includes: │ Replaced by: │ ├──────────┼────────────────────────────────────────┤ │%d │ The directory name of the file, │ │ │ equivalent to the result of the │ │ │ dirname utility on the translated │ │ │ pathname. │ │%f │ The filename of the file, equivalent │ │ │ to the result of the basename utility │ │ │ on the translated pathname. │ │%p │ The process ID of the pax process. │ │%% │ A '%' character. │ └──────────┴────────────────────────────────────────┘ Any other '%' characters in string produce undefined results. If no -o exthdr.name=string is specified, pax shall use the following default value: %d/PaxHeaders.%p/%f globexthdr.name=string (Applicable only to the -x pax format.) When used in write or copy mode with the appropriate options, pax shall create global extended header records with ustar header blocks that will be treated as regular files by previous versions of pax. This keyword allows user control over the name that is written into the ustar header blocks for global extended header records. The name shall be the contents of string, after the following character substitutions have been made: ┌──────────┬────────────────────────────────────────┐ │ string │ │ │Includes: │ Replaced by: │ ├──────────┼────────────────────────────────────────┤ │%n │ An integer that represents the │ │ │ sequence number of the global extended │ │ │ header record in the archive, starting │ │ │ at 1. │ │%p │ The process ID of the pax process. │ │%% │ A '%' character. │ └──────────┴────────────────────────────────────────┘ Any other '%' characters in string produce undefined results. 
If no -o globexthdr.name=string is specified, pax shall use the following default value: $TMPDIR/GlobalHead.%p.%n where $TMPDIR represents the value of the TMPDIR environment variable. If TMPDIR is not set, pax shall use /tmp. invalid=action (Applicable only to the -x pax format.) This keyword allows user control over the action pax takes upon encountering values in an extended header record that, in read or copy mode, are invalid in the destination hierarchy or, in list mode, cannot be written in the codeset and current locale of the implementation. The following are invalid values that shall be recognized by pax: -- In read or copy mode, a filename or link name that contains character encodings invalid in the destination hierarchy. (For example, the name may contain embedded NULs.) -- In read or copy mode, a filename or link name that is longer than the maximum allowed in the destination hierarchy (for either a pathname component or the entire pathname). -- In list mode, any character string value (filename, link name, user name, and so on) that cannot be written in the codeset and current locale of the implementation. The following mutually-exclusive values of the action argument are supported: binary In write mode, pax shall generate a hdrcharset=BINARY extended header record for each file with a filename, link name, group name, owner name, or any other field in an extended header record that cannot be translated to the UTF‐8 codeset, allowing the archive to contain the files with unencoded extended header record values. In read or copy mode, pax shall use the values specified in the header without translation, regardless of whether this may overwrite an existing file with a valid name. In list mode, pax shall behave identically to the bypass action. bypass In read or copy mode, pax shall bypass the file, causing no change to the destination hierarchy. In list mode, pax shall write all requested valid values for the file, but its method for writing invalid values is unspecified. rename In read or copy mode, pax shall act as if the -i option were in effect for each file with invalid filename or link name values, allowing the user to provide a replacement name interactively. In list mode, pax shall behave identically to the bypass action. UTF‐8 When used in read, copy, or list mode and a filename, link name, owner name, or any other field in an extended header record cannot be translated from the pax UTF‐8 codeset format to the codeset and current locale of the implementation, pax shall use the actual UTF‐8 encoding for the name. If a hdrcharset extended header record is in effect for this file, the character set specified by that record shall be used instead of UTF‐8. If a hdrcharset=BINARY extended header record is in effect for this file, no translation shall be performed. write In read or copy mode, pax shall write the file, translating the name, regardless of whether this may overwrite an existing file with a valid name. In list mode, pax shall behave identically to the bypass action. If no -o invalid=option is specified, pax shall act as if -oinvalid=bypass were specified. Any overwriting of existing files that may be allowed by the -oinvalid= actions shall be subject to permission (-p) and modification time (-u) restrictions, and shall be suppressed if the -k option is also specified. linkdata (Applicable only to the -x pax format.) 
In write mode, pax shall write the contents of a file to the archive even when that file is merely a hard link to a file whose contents have already been written to the archive. listopt=format This keyword specifies the output format of the table of contents produced when the -v option is specified in list mode. See List Mode Format Specifications. To avoid ambiguity, the listopt=format shall be the only or final keyword=value pair in a -o option-argument; all characters in the remainder of the option- argument shall be considered part of the format string. When multiple -olistopt=format options are specified, the format strings shall be considered a single, concatenated string, evaluated in command line order. times (Applicable only to the -x pax format.) When used in write or copy mode, pax shall include atime and mtime extended header records for each file. See pax Extended Header File Times. In addition to these keywords, if the -x pax format is specified, any of the keywords and values defined in pax Extended Header, including implementation extensions, can be used in -o option-arguments, in either of two modes: keyword=value When used in write or copy mode, these keyword/value pairs shall be included at the beginning of the archive as typeflag g global extended header records. When used in read or list mode, these keyword/value pairs shall act as if they had been at the beginning of the archive as typeflag g global extended header records. keyword:=value When used in write or copy mode, these keyword/value pairs shall be included as records at the beginning of a typeflag x extended header for each file. (This shall be equivalent to the <equals-sign> form except that it creates no typeflag g global extended header records.) When used in read or list mode, these keyword/value pairs shall act as if they were included as records at the end of each extended header; thus, they shall override any global or file-specific extended header record keywords of the same names. For example, in the command: pax -r -o " gname:=mygroup, " <archive the group name will be forced to a new value for all files read from the archive. The precedence of -o keywords over various fields in the archive is described in pax Extended Header Keyword Precedence. If the -o delete=pattern, -o keyword=value, or -o keyword:=value options are used to override or remove any extended header data needed to find files in an archive (e.g., -o delete=size for a file whose size cannot be represented in a ustar header or -o size=100 for a file whose size is not 100 bytes), the behavior is undefined. -p string Specify one or more file characteristic options (privileges). The string option-argument shall be a string specifying file characteristics to be retained or discarded on extraction. The string shall consist of the specification characters a, e, m, o, and p. Other implementation-defined characters can be included. Multiple characteristics can be concatenated within the same string and multiple -p options can be specified. The meaning of the specification characters are as follows: a Do not preserve file access times. e Preserve the user ID, group ID, file mode bits (see the Base Definitions volume of POSIX.1‐2017, Section 3.169, File Mode Bits), access time, modification time, and any other implementation- defined file characteristics. m Do not preserve file modification times. o Preserve the user ID and group ID. p Preserve the file mode bits. Other implementation-defined file mode attributes may be preserved. 
In the preceding list, ``preserve'' indicates that an attribute stored in the archive shall be given to the extracted file, subject to the permissions of the invoking process. The access and modification times of the file shall be preserved unless otherwise specified with the -p option or not stored in the archive. All attributes that are not preserved shall be determined as part of the normal file creation action (see Section 1.1.1.4, File Read, Write, and Creation). If neither the e nor the o specification character is specified, or the user ID and group ID are not preserved for any reason, pax shall not set the S_ISUID and S_ISGID bits of the file mode. If the preservation of any of these items fails for any reason, pax shall write a diagnostic message to standard error. Failure to preserve these items shall affect the final exit status, but shall not cause the extracted file to be deleted. If file characteristic letters in any of the string option-arguments are duplicated or conflict with each other, the ones given last shall take precedence. For example, if -p eme is specified, file modification times are preserved. -s replstr Modify file or archive member names named by pattern or file operands according to the substitution expression replstr, using the syntax of the ed utility. The concepts of ``address'' and ``line'' are meaningless in the context of the pax utility, and shall not be supplied. The format shall be: -s /old/new/[gp] where as in ed, old is a basic regular expression and new can contain an <ampersand>, '\n' (where n is a digit) back-references, or subexpression matching. The old string shall also be permitted to contain <newline> characters. Any non-null character can be used as a delimiter ('/' shown here). Multiple -s expressions can be specified; the expressions shall be applied in the order specified, terminating with the first successful substitution. The optional trailing 'g' is as defined in the ed utility. The optional trailing 'p' shall cause successful substitutions to be written to standard error. File or archive member names that substitute to the empty string shall be ignored when reading and writing archives. -t When reading files from the file system, and if the user has the permissions required by utime() to do so, set the access time of each file read to the access time that it had before being read by pax. -u Ignore files that are older (having a less recent file modification time) than a pre-existing file or archive member with the same name. In read mode, an archive member with the same name as a file in the file system shall be extracted if the archive member is newer than the file. In write mode, an archive file member with the same name as a file in the file system shall be superseded if the file is newer than the archive member. If -a is also specified, this is accomplished by appending to the archive; otherwise, it is unspecified whether this is accomplished by actual replacement in the archive or by appending to the archive. In copy mode, the file in the destination hierarchy shall be replaced by the file in the source hierarchy or by a link to the file in the source hierarchy if the file in the source hierarchy is newer. -v In list mode, produce a verbose table of contents (see the STDOUT section). Otherwise, write archive member pathnames to standard error (see the STDERR section). -x format Specify the output archive format. 
The pax utility shall support the following formats: cpio The cpio interchange format; see the EXTENDED DESCRIPTION section. The default blocksize for this format for character special archive files shall be 5120. Implementations shall support all blocksize values less than or equal to 32256 that are multiples of 512. pax The pax interchange format; see the EXTENDED DESCRIPTION section. The default blocksize for this format for character special archive files shall be 5120. Implementations shall support all blocksize values less than or equal to 32256 that are multiples of 512. ustar The tar interchange format; see the EXTENDED DESCRIPTION section. The default blocksize for this format for character special archive files shall be 10240. Implementations shall support all blocksize values less than or equal to 32256 that are multiples of 512. Implementation-defined formats shall specify a default block size as well as any other block sizes supported for character special archive files. Any attempt to append to an archive file in a format different from the existing archive format shall cause pax to exit immediately with a non-zero exit status. -X When traversing the file hierarchy specified by a pathname, pax shall not descend into directories that have a different device ID (st_dev; see the System Interfaces volume of POSIX.1‐2017, stat()). Specifying more than one of the mutually-exclusive options -H and -L shall not be considered an error and the last option specified shall determine the behavior of the utility. The options that operate on the names of files or archive members (-c, -i, -n, -s, -u, and -v) shall interact as follows. In read mode, the archive members shall be selected based on the user- specified pattern operands as modified by the -c, -n, and -u options. Then, any -s and -i options shall modify, in that order, the names of the selected files. The -v option shall write names resulting from these modifications. In write mode, the files shall be selected based on the user- specified pathnames as modified by the -n and -u options. Then, any -s and -i options shall modify, in that order, the names of these selected files. The -v option shall write names resulting from these modifications. If both the -u and -n options are specified, pax shall not consider a file selected unless it is newer than the file to which it is compared. List Mode Format Specifications In list mode with the -o listopt=format option, the format argument shall be applied for each selected file. The pax utility shall append a <newline> to the listopt output for each selected file. The format argument shall be used as the format string described in the Base Definitions volume of POSIX.1‐2017, Chapter 5, File Format Notation, with the exceptions 1. through 6. defined in the EXTENDED DESCRIPTION section of printf, plus the following exceptions: 7. The sequence (keyword) can occur before a format conversion specifier. The conversion argument is defined by the value of keyword. The implementation shall support the following keywords: -- Any of the Field Name entries in Table 4-14, ustar Header Block and Table 4-16, Octet-Oriented cpio Archive Entry. The implementation may support the cpio keywords without the leading c_ in addition to the form required by Table 4-16, Octet-Oriented cpio Archive Entry. -- Any keyword defined for the extended header in pax Extended Header. -- Any keyword provided as an implementation-defined extension within the extended header defined in pax Extended Header. 
For example, the sequence "%(charset)s" is the string value of the name of the character set in the extended header. The result of the keyword conversion argument shall be the value from the applicable header field or extended header, without any trailing NULs. All keyword values used as conversion arguments shall be translated from the UTF‐8 encoding (or alternative encoding specified by any hdrcharset extended header record) to the character set appropriate for the local file system, user database, and so on, as applicable. 8. An additional conversion specifier character, T, shall be used to specify time formats. The T conversion specifier character can be preceded by the sequence (keyword=subformat), where subformat is a date format as defined by date operands. The default keyword shall be mtime and the default subformat shall be: %b %e %H:%M %Y 9. An additional conversion specifier character, M, shall be used to specify the file mode string as defined in ls Standard Output. If (keyword) is omitted, the mode keyword shall be used. For example, %.1M writes the single character corresponding to the <entry type> field of the ls -l command. 10. An additional conversion specifier character, D, shall be used to specify the device for block or special files, if applicable, in an implementation-defined format. If not applicable, and (keyword) is specified, then this conversion shall be equivalent to %(keyword)u. If not applicable, and (keyword) is omitted, then this conversion shall be equivalent to <space>. 11. An additional conversion specifier character, F, shall be used to specify a pathname. The F conversion character can be preceded by a sequence of <comma>-separated keywords: (keyword[,keyword] ... ) The values for all the keywords that are non-null shall be concatenated together, each separated by a '/'. The default shall be (path) if the keyword path is defined; otherwise, the default shall be (prefix,name). 12. An additional conversion specifier character, L, shall be used to specify a symbolic link expansion. If the current file is a symbolic link, then %L shall expand to: "%s -> %s", <value of keyword>, <contents of link> Otherwise, the %L conversion specification shall be the equivalent of %F.
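A sketch of the four modes with a handful of the options above (archive and path names are placeholders):

    # List mode (neither -r nor -w); -f names the archive instead of standard input
    pax -v -f backup.tar
    # Write mode: archive a directory tree in the pax interchange format
    pax -w -x pax -f backup.tar src
    # Read mode: extract, preserving mode bits and rewriting the leading directory name
    pax -r -p p -s ',^src/,srcdir/,' -f backup.tar
    # Copy mode: replicate a hierarchy into an existing directory, preserving what is permitted
    pax -rw -p e src /mnt/dest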
# pax

> Archiving and copying utility.
> More information: https://manned.org/pax.1p.

* List the contents of an archive: `pax -f {{archive.tar}}`
* List the contents of a gzipped archive: `pax -zf {{archive.tar.gz}}`
* Create an archive from files: `pax -wf {{target.tar}} {{path/to/file1}} {{path/to/file2}} {{path/to/file3}}`
* Create an archive from files, using output redirection: `pax -w {{path/to/file1}} {{path/to/file2}} {{path/to/file3}} > {{target.tar}}`
* Extract an archive into the current directory: `pax -rf {{source.tar}}`
* Copy to a directory, while keeping the original metadata; `target/` must exist: `pax -rw {{path/to/file1}} {{path/to/directory1}} {{path/to/directory2}} {{target/}}`
git-replace
Adds a replace reference in refs/replace/ namespace. The name of the replace reference is the SHA-1 of the object that is replaced. The content of the replace reference is the SHA-1 of the replacement object. The replaced object and the replacement object must be of the same type. This restriction can be bypassed using -f. Unless -f is given, the replace reference must not yet exist. There is no other restriction on the replaced and replacement objects. Merge commits can be replaced by non-merge commits and vice versa. Replacement references will be used by default by all Git commands except those doing reachability traversal (prune, pack transfer and fsck). It is possible to disable use of replacement references for any command using the --no-replace-objects option just after git. For example if commit foo has been replaced by commit bar: $ git --no-replace-objects cat-file commit foo shows information about commit foo, while: $ git cat-file commit foo shows information about commit bar. The GIT_NO_REPLACE_OBJECTS environment variable can be set to achieve the same effect as the --no-replace-objects option. -f, --force If an existing replace ref for the same object exists, it will be overwritten (instead of failing). -d, --delete Delete existing replace refs for the given objects. --edit <object> Edit an object’s content interactively. The existing content for <object> is pretty-printed into a temporary file, an editor is launched on the file, and the result is parsed to create a new object of the same type as <object>. A replacement ref is then created to replace <object> with the newly created object. See git-var(1) for details about how the editor will be chosen. --raw When editing, provide the raw object contents rather than pretty-printed ones. Currently this only affects trees, which will be shown in their binary form. This is harder to work with, but can help when repairing a tree that is so corrupted it cannot be pretty-printed. Note that you may need to configure your editor to cleanly read and write binary data. --graft <commit> [<parent>...] Create a graft commit. A new commit is created with the same content as <commit> except that its parents will be [<parent>...] instead of <commit>'s parents. A replacement ref is then created to replace <commit> with the newly created commit. Use --convert-graft-file to convert a $GIT_DIR/info/grafts file and use replace refs instead. --convert-graft-file Creates graft commits for all entries in $GIT_DIR/info/grafts and deletes that file upon success. The purpose is to help users with transitioning off of the now-deprecated graft file. -l <pattern>, --list <pattern> List replace refs for objects that match the given pattern (or all if no pattern is given). Typing "git replace" without arguments, also lists all replace refs. --format=<format> When listing, use the specified <format>, which can be one of short, medium and long. When omitted, the format defaults to short.
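For instance (object names are placeholders):

    # Pretend a commit has a different parent, without rewriting any history
    git replace --graft {{commit}} {{new-parent}}
    # Compare the replaced view of the object with the original
    git cat-file -p {{commit}}
    git --no-replace-objects cat-file -p {{commit}}
    # List all replace refs, then remove the one just created
    git replace -l
    git replace -d {{commit}}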
# git replace

> Create, list, and delete refs to replace objects.
> More information: https://git-scm.com/docs/git-replace.

* Replace any commit with a different one, leaving other commits unchanged: `git replace {{object}} {{replacement}}`
* Delete existing replace refs for the given objects: `git replace --delete {{object}}`
* Edit an object’s content interactively: `git replace --edit {{object}}`
yes
Repeatedly output a line with all specified STRING(s), or 'y'. --help display this help and exit --version output version information and exit
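The inverse of the usual pattern also works; a small sketch (the glob is a placeholder):

    # Answer "n" to every confirmation from an interactive removal, keeping the files
    yes n | rm -i *.tmp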
# yes

> Output something repeatedly. This command is commonly used to answer yes to every prompt by install commands (such as apt-get).
> More information: https://www.gnu.org/software/coreutils/yes.

* Repeatedly output "message": `yes {{message}}`
* Repeatedly output "y": `yes`
* Accept everything prompted by the `apt-get` command: `yes | sudo apt-get install {{program}}`
mkdir
The mkdir utility shall create the directories specified by the operands, in the order specified. For each dir operand, the mkdir utility shall perform actions equivalent to the mkdir() function defined in the System Interfaces volume of POSIX.1‐2017, called with the following arguments: 1. The dir operand is used as the path argument. 2. The value of the bitwise-inclusive OR of S_IRWXU, S_IRWXG, and S_IRWXO is used as the mode argument. (If the -m option is specified, the value of the mkdir() mode argument is unspecified, but the directory shall at no time have permissions less restrictive than the -m mode option- argument.) The mkdir utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines. The following options shall be supported: -m mode Set the file permission bits of the newly-created directory to the specified mode value. The mode option- argument shall be the same as the mode operand defined for the chmod utility. In the symbolic_mode strings, the op characters '+' and '-' shall be interpreted relative to an assumed initial mode of a=rwx; '+' shall add permissions to the default mode, '-' shall delete permissions from the default mode. -p Create any missing intermediate pathname components. For each dir operand that does not name an existing directory, before performing the actions described in the DESCRIPTION above, the mkdir utility shall create any pathname components of the path prefix of dir that do not name an existing directory by performing actions equivalent to first calling the mkdir() function with the following arguments: 1. A pathname naming the missing pathname component, ending with a trailing <slash> character, as the path argument 2. The value zero as the mode argument and then calling the chmod() function with the following arguments: 1. The same path argument as in the mkdir() call 2. The value (S_IWUSR|S_IXUSR|~filemask)&0777 as the mode argument, where filemask is the file mode creation mask of the process (see the System Interfaces volume of POSIX.1‐2017, umask(3p)) Each dir operand that names an existing directory shall be ignored without error.
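For example (directory names are placeholders):

    # Create missing intermediate components in one call; existing directories are not an error
    mkdir -p project/src/util
    # An absolute mode is applied as given, independent of the umask
    mkdir -m 750 project/private
    # '+' and '-' in a symbolic mode act on an assumed initial a=rwx, so this yields rwxr-xr-x
    mkdir -m go-w project/reports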
# mkdir

> Create directories and set their permissions.
> More information: https://www.gnu.org/software/coreutils/mkdir.

* Create specific directories: `mkdir {{path/to/directory1 path/to/directory2 ...}}`
* Create specific directories and their [p]arents if needed: `mkdir -p {{path/to/directory1 path/to/directory2 ...}}`
* Create directories with specific permissions: `mkdir -m {{u=rwx,g=rw,o=r}} {{path/to/directory1 path/to/directory2 ...}}`
ipcrm
ipcrm removes System V inter-process communication (IPC) objects and associated data structures from the system. In order to delete such objects, you must be superuser, or the creator or owner of the object. System V IPC objects are of three types: shared memory, message queues, and semaphores. Deletion of a message queue or semaphore object is immediate (regardless of whether any process still holds an IPC identifier for the object). A shared memory object is only removed after all currently attached processes have detached (shmdt(2)) the object from their virtual address space. Two syntax styles are supported. The old Linux historical syntax specifies a three-letter keyword indicating which class of object is to be deleted, followed by one or more IPC identifiers for objects of this type. The SUS-compliant syntax allows the specification of zero or more objects of all three types in a single command line, with objects specified either by key or by identifier (see below). Both keys and identifiers may be specified in decimal, hexadecimal (specified with an initial '0x' or '0X'), or octal (specified with an initial '0'). The details of the removes are described in shmctl(2), msgctl(2), and semctl(2). The identifiers and keys can be found by using ipcs(1). -a, --all [shm] [msg] [sem] Remove all resources. When an option argument is provided, the removal is performed only for the specified resource types. Warning! Do not use -a if you are unsure how the software using the resources might react to missing objects. Some programs create these resources at startup and may not have any code to deal with an unexpected disappearance. -M, --shmem-key shmkey Remove the shared memory segment created with shmkey after the last detach is performed. -m, --shmem-id shmid Remove the shared memory segment identified by shmid after the last detach is performed. -Q, --queue-key msgkey Remove the message queue created with msgkey. -q, --queue-id msgid Remove the message queue identified by msgid. -S, --semaphore-key semkey Remove the semaphore created with semkey. -s, --semaphore-id semid Remove the semaphore identified by semid. -h, --help Display help text and exit. -V, --version Print version and exit.
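A typical sequence (the identifier and key values shown are only illustrative):

    # Locate leftover shared memory segments, then remove one by identifier
    ipcs -m
    ipcrm --shmem-id 65538
    # Keys may be given in decimal, hexadecimal, or octal
    ipcrm --queue-key 0x4d2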
# ipcrm

> Delete IPC (Inter-process Communication) resources.
> More information: https://manned.org/ipcrm.

* Delete a shared memory segment by ID: `ipcrm --shmem-id {{shmem_id}}`
* Delete a shared memory segment by key: `ipcrm --shmem-key {{shmem_key}}`
* Delete an IPC queue by ID: `ipcrm --queue-id {{ipc_queue_id}}`
* Delete an IPC queue by key: `ipcrm --queue-key {{ipc_queue_key}}`
* Delete a semaphore by ID: `ipcrm --semaphore-id {{semaphore_id}}`
* Delete a semaphore by key: `ipcrm --semaphore-key {{semaphore_key}}`
* Delete all IPC resources: `ipcrm --all`
chmod
The chmod utility shall change any or all of the file mode bits of the file named by each file operand in the way specified by the mode operand. It is implementation-defined whether and how the chmod utility affects any alternate or additional file access control mechanism (see the Base Definitions volume of POSIX.1‐2017, Section 4.5, File Access Permissions) being used for the specified file. Only a process whose effective user ID matches the user ID of the file, or a process with appropriate privileges, shall be permitted to change the file mode bits of a file. Upon successfully changing the file mode bits of a file, the chmod utility shall mark for update the last file status change timestamp of the file. The chmod utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines. The following option shall be supported: -R Recursively change file mode bits. For each file operand that names a directory, chmod shall change the file mode bits of the directory and all files in the file hierarchy below it.
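Two brief examples (paths are placeholders):

    # Absolute (octal) mode: set exactly rw-r--r-- on one file
    chmod 644 notes.txt
    # Recursively remove write permission for group and others throughout a hierarchy
    chmod -R go-w shared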
# chmod

> Change the access permissions of a file or directory.
> More information: https://www.gnu.org/software/coreutils/chmod.

* Give the [u]ser who owns a file the right to e[x]ecute it: `chmod u+x {{path/to/file}}`
* Give the [u]ser rights to [r]ead and [w]rite to a file/directory: `chmod u+rw {{path/to/file_or_directory}}`
* Remove e[x]ecutable rights from the [g]roup: `chmod g-x {{path/to/file}}`
* Give [a]ll users rights to [r]ead and e[x]ecute: `chmod a+rx {{path/to/file}}`
* Give [o]thers (not in the file owner's group) the same rights as the [g]roup: `chmod o=g {{path/to/file}}`
* Remove all rights from [o]thers: `chmod o= {{path/to/file}}`
* Change permissions recursively giving [g]roup and [o]thers the ability to [w]rite: `chmod -R g+w,o+w {{path/to/directory}}`
* Recursively give [a]ll users [r]ead permissions to files and e[X]ecute permissions to sub-directories within a directory: `chmod -R a+rX {{path/to/directory}}`
git-help
With no options and no <command> or <doc> given, the synopsis of the git command and a list of the most commonly used Git commands are printed on the standard output. If the option --all or -a is given, all available commands are printed on the standard output. If the option --guides or -g is given, a list of the Git concept guides is also printed on the standard output. If a command or other documentation is given, the relevant manual page will be brought up. The man program is used by default for this purpose, but this can be overridden by other options or configuration variables. If an alias is given, git shows the definition of the alias on standard output. To get the manual page for the aliased command, use git <command> --help. Note that git --help ... is identical to git help ... because the former is internally converted into the latter. To display the git(1) man page, use git help git. This page can be displayed with git help help or git help --help -a, --all Prints all the available commands on the standard output. --no-external-commands When used with --all, exclude the listing of external "git-*" commands found in the $PATH. --no-aliases When used with --all, exclude the listing of configured aliases. --verbose When used with --all print description for all recognized commands. This is the default. -c, --config List all available configuration variables. This is a short summary of the list in git-config(1). -g, --guides Prints a list of the Git concept guides on the standard output. --user-interfaces Prints a list of the repository, command and file interfaces documentation on the standard output. In-repository file interfaces such as .git/info/exclude are documented here (see gitrepository-layout(5)), as well as in-tree configuration such as .mailmap (see gitmailmap(5)). This section of the documentation also covers general or widespread user-interface conventions (e.g. gitcli(7)), and pseudo-configuration such as the file-based .git/hooks/* interface described in githooks(5). --developer-interfaces Print list of file formats, protocols and other developer interfaces documentation on the standard output. -i, --info Display manual page for the command in the info format. The info program will be used for that purpose. -m, --man Display manual page for the command in the man format. This option may be used to override a value set in the help.format configuration variable. By default the man program will be used to display the manual page, but the man.viewer configuration variable may be used to choose other display programs (see below). -w, --web Display manual page for the command in the web (HTML) format. A web browser will be used for that purpose. The web browser can be specified using the configuration variable help.browser, or web.browser if the former is not set. If none of these config variables is set, the git web--browse helper script (called by git help) will pick a suitable default. See git-web--browse(1) for more information about this.
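For example:

    # Read a concept guide rather than a command page
    git help gittutorial
    # The same page rendered as HTML in a browser
    git help --web gittutorial
    # For a configured alias (assuming one named "st" exists), the definition is printed instead
    git help st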
# git help > Display help information about Git. More information: > https://git-scm.com/docs/git-help. * Display help about a specific Git subcommand: `git help {{subcommand}}` * Display help about a specific Git subcommand in a web browser: `git help --web {{subcommand}}` * Display a list of all available Git subcommands: `git help --all` * List the available guides: `git help --guides` * List all possible configuration variables: `git help --config`
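A supplementary sketch based on the -m/--man and help.format descriptions above (the subcommand is a placeholder): force the man-format page for a subcommand, or set the preferred format persistently:
`git help --man {{subcommand}}`
`git config --global help.format {{man|info|web}}`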
sort
The sort utility shall perform one of the following functions: 1. Sort lines of all the named files together and write the result to the specified output. 2. Merge lines of all the named (presorted) files together and write the result to the specified output. 3. Check that a single input file is correctly presorted. Comparisons shall be based on one or more sort keys extracted from each line of input (or, if no sort keys are specified, the entire line up to, but not including, the terminating <newline>), and shall be performed using the collating sequence of the current locale. If this collating sequence does not have a total ordering of all characters (see the Base Definitions volume of POSIX.1‐2017, Section 7.3.2, LC_COLLATE), any lines of input that collate equally should be further compared byte-by-byte using the collating sequence for the POSIX locale. The sort utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines, except for Guideline 9, and the -k keydef option should follow the -b, -d, -f, -i, -n, and -r options. In addition, '+' may be recognized as an option delimiter as well as '-'. The following options shall be supported: -c Check that the single input file is ordered as specified by the arguments and the collating sequence of the current locale. Output shall not be sent to standard output. The exit code shall indicate whether or not disorder was detected or an error occurred. If disorder (or, with -u, a duplicate key) is detected, a warning message shall be sent to standard error indicating where the disorder or duplicate key was found. -C Same as -c, except that a warning message shall not be sent to standard error if disorder or, with -u, a duplicate key is detected. -m Merge only; the input file shall be assumed to be already sorted. -o output Specify the name of an output file to be used instead of the standard output. This file can be the same as one of the input files. -u Unique: suppress all but one in each set of lines having equal keys. If used with the -c option, check that there are no lines with duplicate keys, in addition to checking that the input file is sorted. The following options shall override the default ordering rules. When ordering options appear independent of any key field specifications, the requested field ordering rules shall be applied globally to all sort keys. When attached to a specific key (see -k), the specified ordering options shall override all global ordering options for that key. -d Specify that only <blank> characters and alphanumeric characters, according to the current setting of LC_CTYPE, shall be significant in comparisons. The behavior is undefined for a sort key to which -i or -n also applies. -f Consider all lowercase characters that have uppercase equivalents, according to the current setting of LC_CTYPE, to be the uppercase equivalent for the purposes of comparison. -i Ignore all characters that are non-printable, according to the current setting of LC_CTYPE. The behavior is undefined for a sort key for which -n also applies. -n Restrict the sort key to an initial numeric string, consisting of optional <blank> characters, optional <hyphen-minus> character, and zero or more digits with an optional radix character and thousands separators (as defined in the current locale), which shall be sorted by arithmetic value. An empty digit string shall be treated as zero. Leading zeros and signs on zeros shall not affect ordering. -r Reverse the sense of comparisons. 
The treatment of field separators can be altered using the options: -b Ignore leading <blank> characters when determining the starting and ending positions of a restricted sort key. If the -b option is specified before the first -k option, it shall be applied to all -k options. Otherwise, the -b option can be attached independently to each -k field_start or field_end option-argument (see below). -t char Use char as the field separator character; char shall not be considered to be part of a field (although it can be included in a sort key). Each occurrence of char shall be significant (for example, <char><char> delimits an empty field). If -t is not specified, <blank> characters shall be used as default field separators; each maximal non-empty sequence of <blank> characters that follows a non-<blank> shall be a field separator. Sort keys can be specified using the options: -k keydef The keydef argument is a restricted sort key field definition. The format of this definition is: field_start[type][,field_end[type]] where field_start and field_end define a key field restricted to a portion of the line (see the EXTENDED DESCRIPTION section), and type is one or more modifiers from the list of characters 'b', 'd', 'f', 'i', 'n', 'r'. The 'b' modifier shall behave like the -b option, but shall apply only to the field_start or field_end to which it is attached. The other modifiers shall behave like the corresponding options, but shall apply only to the key field to which they are attached; they shall have this effect if specified with field_start, field_end, or both. If any modifier is attached to a field_start or to a field_end, no option shall apply to either. Implementations shall support at least nine occurrences of the -k option, which shall be significant in command line order. If no -k option is specified, a default sort key of the entire line shall be used. When there are multiple key fields, later keys shall be compared only after all earlier keys compare equal. Except when the -u option is specified, lines that otherwise compare equal shall be ordered as if none of the options -d, -f, -i, -n, or -k were present (but with -r still in effect, if it was specified) and with all bytes in the lines significant to the comparison. The order in which lines that still compare equal are written is unspecified.
# sort > Sort lines of text files. More information: > https://www.gnu.org/software/coreutils/sort. * Sort a file in ascending order: `sort {{path/to/file}}` * Sort a file in descending order: `sort --reverse {{path/to/file}}` * Sort a file in a case-insensitive way: `sort --ignore-case {{path/to/file}}` * Sort a file using numeric rather than alphabetic order: `sort --numeric-sort {{path/to/file}}` * Sort `/etc/passwd` by the 3rd field of each line numerically, using ":" as a field separator: `sort --field-separator={{:}} --key={{3n}} {{/etc/passwd}}` * Sort a file preserving only unique lines: `sort --unique {{path/to/file}}` * Sort a file, printing the output to the specified output file (can be used to sort a file in-place): `sort --output={{path/to/file}} {{path/to/file}}` * Sort numbers with exponents: `sort --general-numeric-sort {{path/to/file}}`
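A worked sketch of the -k keydef syntax described above (the CSV file name is hypothetical): sort comma-separated records by the second field numerically in reverse, breaking ties on the first field:
`sort --field-separator={{,}} --key={{2,2nr}} --key={{1,1}} {{path/to/file.csv}}`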
md5sum
Print or check MD5 (128-bit) checksums. With no FILE, or when FILE is -, read standard input. -b, --binary read in binary mode -c, --check read checksums from the FILEs and check them --tag create a BSD-style checksum -t, --text read in text mode (default) -z, --zero end each output line with NUL, not newline, and disable file name escaping The following five options are useful only when verifying checksums: --ignore-missing don't fail or report status for missing files --quiet don't print OK for each successfully verified file --status don't output anything, status code shows success --strict exit non-zero for improperly formatted checksum lines -w, --warn warn about improperly formatted checksum lines --help display this help and exit --version output version information and exit The sums are computed as described in RFC 1321. When checking, the input should be a former output of this program. The default mode is to print a line with: checksum, a space, a character indicating input mode ('*' for binary, ' ' for text or where binary is insignificant), and name for each FILE. Note: There is no difference between binary mode and text mode on GNU systems.
# md5sum > Calculate MD5 cryptographic checksums. More information: > https://www.gnu.org/software/coreutils/md5sum. * Calculate the MD5 checksum for one or more files: `md5sum {{path/to/file1 path/to/file2 ...}}` * Calculate and save the list of MD5 checksums to a file: `md5sum {{path/to/file1 path/to/file2 ...}} > {{path/to/file.md5}}` * Calculate an MD5 checksum from `stdin`: `{{command}} | md5sum` * Read a file of MD5 sums and filenames and verify all files have matching checksums: `md5sum --check {{path/to/file.md5}}` * Only show a message for missing files or when verification fails: `md5sum --check --quiet {{path/to/file.md5}}` * Only show a message when verification fails, ignoring missing files: `md5sum --ignore-missing --check --quiet {{path/to/file.md5}}`
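A supplementary sketch of the --tag option described above (the file path is a placeholder): emit a BSD-style line of the form `MD5 (file) = <checksum>`:
`md5sum --tag {{path/to/file}}`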
kill
The default signal for kill is TERM. Use -l or -L to list available signals. Particularly useful signals include HUP, INT, KILL, STOP, CONT, and 0. Alternate signals may be specified in three ways: -9, -SIGKILL or -KILL. Negative PID values may be used to choose whole process groups; see the PGID column in ps command output. A PID of -1 is special; it indicates all processes except the kill process itself and init. <pid> [...] Send signal to every <pid> listed. -<signal> -s <signal> --signal <signal> Specify the signal to be sent. The signal can be specified by using a name or a number. The behavior of signals is explained in the signal(7) manual page. -q, --queue value Use sigqueue(3) rather than kill(2), and use the value argument to specify an integer to be sent with the signal. If the receiving process has installed a handler for this signal using the SA_SIGINFO flag to sigaction(2), then it can obtain this data via the si_value field of the siginfo_t structure. -l, --list [signal] List signal names. This option has an optional argument, which converts a signal number to a signal name, or the other way round. -L, --table List signal names in a nice table.
# kill > Sends a signal to a process, usually related to stopping the process. All > signals except for SIGKILL and SIGSTOP can be intercepted by the process to > perform a clean exit. More information: https://manned.org/kill. * Terminate a program using the default SIGTERM (terminate) signal: `kill {{process_id}}` * List available signal names (to be used without the `SIG` prefix): `kill -l` * Terminate a background job: `kill %{{job_id}}` * Terminate a program using the SIGHUP (hang up) signal. Many daemons will reload instead of terminating: `kill -{{1|HUP}} {{process_id}}` * Terminate a program using the SIGINT (interrupt) signal. This is typically initiated by the user pressing `Ctrl + C`: `kill -{{2|INT}} {{process_id}}` * Signal the operating system to immediately terminate a program (which gets no chance to capture the signal): `kill -{{9|KILL}} {{process_id}}` * Signal the operating system to pause a program until a SIGCONT ("continue") signal is received: `kill -{{17|STOP}} {{process_id}}` * Send a `SIGUSR1` signal to all processes with the given GID (group id): `kill -{{SIGUSR1}} -{{group_id}}`
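A hedged sketch of the -q/--queue behaviour described above (the PID, payload value, and signal are placeholders): send SIGUSR1 together with an integer payload via sigqueue(3), which a handler installed with SA_SIGINFO can read from si_value:
`kill --queue {{42}} --signal {{USR1}} {{process_id}}`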
groff
groff is the primary front end to the GNU roff document formatting system. GNU roff is a typesetting system that reads plain text input files that include formatting commands to produce output in PostScript, PDF, HTML, DVI, or other formats, or for display to a terminal. Formatting commands can be low-level typesetting primitives, macros from a supplied package, or user-defined macros. All three approaches can be combined. If no file operands are specified, or if file is “-”, groff reads the standard input stream. A reimplementation and extension of the typesetter from AT&T Unix, groff is present on most POSIX systems owing to its long association with Unix manuals (including man pages). It and its predecessor are notable for their production of several best-selling software engineering texts. groff is capable of producing typographically sophisticated documents while consuming minimal system resources. The groff command orchestrates the execution of preprocessors, the transformation of input documents into a device-independent page description language, and the production of output from that language. -h and --help display a usage message and exit. Because groff is intended to subsume most users' direct invocations of the troff(1) formatter, the two programs share a set of options. However, groff has some options that troff does not share, and others which groff interprets differently. At the same time, not all valid troff options can be given to groff. groff-specific options The following options either do not exist in GNU troff or are interpreted differently by groff. -D enc Set fallback input encoding used by preconv(1) to enc; implies -k. -e Run eqn(1) preprocessor. -g Run grn(1) preprocessor. -G Run grap(1) preprocessor; implies -p. -I dir Works as troff's option (see below), but also implies -g and -s. It is passed to soelim(1) and the output driver, and grn is passed an -M option with dir as its argument. -j Run chem(1) preprocessor; implies -p. -k Run preconv(1) preprocessor. Refer to its man page for its behavior if neither of groff's -K or -D options is also specified. -K enc Set input encoding used by preconv(1) to enc; implies -k. -l Send the output to a spooler program for printing. The “print” directive in the device description file specifies the default command to be used; see groff_font(5). If no such directive is present for the output device, output is piped to lpr(1). See options -L and -X. -L arg Pass arg to the print spooler program. If multiple args are required, pass each with a separate -L option. groff does not prefix an option dash to arg before passing it to the spooler program. -M Works as troff's option (see below), but is also passed to eqn(1), grap(1), and grn(1). -N Prohibit newlines between eqn delimiters: pass -N to eqn(1). -p Run pic(1) preprocessor. -P arg Pass arg to the postprocessor. If multiple args are required, pass each with a separate -P option. groff does not prefix an option dash to arg before passing it to the postprocessor. -R Run refer(1) preprocessor. No mechanism is provided for passing arguments to refer because most refer options have equivalent language elements that can be specified within the document. -s Run soelim(1) preprocessor. -S Operate in “safer” mode; see -U below for its opposite. For security reasons, safer mode is enabled by default. -t Run tbl(1) preprocessor. -T dev Direct troff to format the input for the output device dev.
groff then calls an output driver to convert troff's output to a form appropriate for dev; see subsection “Output devices” below. -U Operate in unsafe mode: pass the -U option to pic and troff. -v --version Write version information for groff and all programs run by it to the standard output stream; that is, the given command line is processed in the usual way, passing -v to the formatter and any pre- or postprocessors invoked. -V Output the pipeline that groff would run to the standard output stream, but do not execute it. If given more than once, groff both writes and runs the pipeline. -X Use gxditview(1) instead of the usual postprocessor to (pre)view a document on an X11 display. Combining this option with -Tps uses the font metrics of the PostScript device, whereas the -TX75 and -TX100 options use the metrics of X11 fonts. -Z Disable postprocessing. troff output will appear on the standard output stream (unless suppressed with -z); see groff_out(5) for a description of this format. Transparent options The following options are passed as-is to the formatter program troff(1) and described in more detail in its man page. -a Generate a plain text approximation of the typeset output. -b Write a backtrace to the standard error stream on each error or warning. -c Start with color output disabled. -C Enable AT&T troff compatibility mode; implies -c. -d cs -d name=string Define string. -E Inhibit troff error messages; implies -Ww. -f fam Set default font family. -F dir Search in directory dir for the selected output device's directory of device and font description files. -i Process standard input after the specified input files. -I dir Search dir for input files. -m name Process name.tmac before input files. -M dir Search directory dir for macro files. -n num Number the first page num. -o list Output only pages in list. -r cnumeric-expression -r register=numeric-expression Define register. -w name -W name Enable (-w) or inhibit (-W) emission of warnings in category name. -z Suppress formatted device-independent output of troff.
# groff > GNU replacement for the `troff` and `nroff` typesetting utilities. More > information: https://www.gnu.org/software/groff. * Format output for a PostScript printer, saving the output to a file: `groff {{path/to/input.roff}} > {{path/to/output.ps}}` * Render a man page using the ASCII output device, and display it using a pager: `groff -man -T ascii {{path/to/manpage.1}} | less --RAW-CONTROL-CHARS` * Render a man page into an HTML file: `groff -man -T html {{path/to/manpage.1}} > {{path/to/manpage.html}}` * Typeset a roff file containing [t]ables and [p]ictures, using the [me] macro set, to PDF, saving the output: `groff {{-t}} {{-p}} -{{me}} -T {{pdf}} {{path/to/input.me}} > {{path/to/output.pdf}}` * Run a `groff` command with preprocessor and macro options guessed by the `grog` utility: `eval "$(grog -T utf8 {{path/to/input.me}})"`
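A combined sketch drawing on the preprocessor options above (the input file and macro package are illustrative): run the [t]bl and [e]qn preprocessors, use the ms macros, and produce PDF output:
`groff -t -e -ms -T pdf {{path/to/input.ms}} > {{path/to/output.pdf}}`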
git-checkout-index
Will copy all files listed from the index to the working directory (not overwriting existing files). -u, --index update stat information for the checked out entries in the index file. -q, --quiet be quiet if files exist or are not in the index -f, --force forces overwrite of existing files -a, --all checks out all files in the index except for those with the skip-worktree bit set (see --ignore-skip-worktree-bits). Cannot be used together with explicit filenames. -n, --no-create Don’t checkout new files, only refresh files already checked out. --prefix=<string> When creating files, prepend <string> (usually a directory including a trailing /) --stage=<number>|all Instead of checking out unmerged entries, copy out the files from named stage. <number> must be between 1 and 3. Note: --stage=all automatically implies --temp. --temp Instead of copying the files to the working directory write the content to temporary files. The temporary name associations will be written to stdout. --ignore-skip-worktree-bits Check out all files, including those with the skip-worktree bit set. --stdin Instead of taking list of paths from the command line, read list of paths from the standard input. Paths are separated by LF (i.e. one path per line) by default. -z Only meaningful with --stdin; paths are separated with NUL character instead of LF. -- Do not interpret any more arguments as options. The order of the flags used to matter, but not anymore. Just doing git checkout-index does nothing. You probably meant git checkout-index -a. And if you want to force it, you want git checkout-index -f -a. Intuitiveness is not the goal here. Repeatability is. The reason for the "no arguments means no work" behavior is that from scripts you are supposed to be able to do: $ find . -name '*.h' -print0 | xargs -0 git checkout-index -f -- which will force all existing *.h files to be replaced with their cached copies. If an empty command line implied "all", then this would force-refresh everything in the index, which was not the point. But since git checkout-index accepts --stdin it would be faster to use: $ find . -name '*.h' -print0 | git checkout-index -f -z --stdin The -- is just a good idea when you know the rest will be filenames; it will prevent problems with a filename of, for example, -a. Using -- is probably a good policy in scripts.
# git checkout-index > Copy files from the index to the working tree. More information: > https://git-scm.com/docs/git-checkout-index. * Restore any files deleted since the last commit: `git checkout-index --all` * Restore any files deleted or changed since the last commit: `git checkout-index --all --force` * Restore any files changed since the last commit, ignoring any files that were deleted: `git checkout-index --all --force --no-create` * Export a copy of the entire tree at the last commit to the specified directory (the trailing slash is important): `git checkout-index --all --force --prefix={{path/to/export_directory/}}`
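A supplementary sketch of the --temp option described above (the path is a placeholder): write the indexed content to a temporary file instead of the working tree and print the temporary-to-original name mapping on `stdout`:
`git checkout-index --temp -- {{path/to/file}}`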
trace-cmd
The trace-cmd(1) command interacts with the Ftrace tracer that is built inside the Linux kernel. It interfaces with the Ftrace-specific files found in the debugfs file system under the tracing directory. A COMMAND must be specified to tell trace-cmd what to do. -h, --help Display the help text. For other options, see the man page for the corresponding command.
# trace-cmd > Utility to interact with the Ftrace Linux kernel internal tracer. This > utility only runs as root. More information: https://manned.org/trace-cmd. * Display the status of tracing system: `trace-cmd stat` * List available tracers: `trace-cmd list -t` * Start tracing with a specific plugin: `trace-cmd start -p {{timerlat|osnoise|hwlat|blk|mmiotrace|function_graph|wakeup_dl|wakeup_rt|wakeup|function|nop}}` * View the trace output: `trace-cmd show` * Stop the tracing but retain the buffers: `trace-cmd stop` * Clear the trace buffers: `trace-cmd clear` * Clear the trace buffers and stop tracing: `trace-cmd reset`
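A minimal record-and-report sketch (hedged: it assumes the record and report subcommands and a kernel with the function_graph tracer available; the PID is a placeholder): capture tracing data for one process into trace.dat, then render it:
`sudo trace-cmd record -p function_graph -P {{pid}}`
`trace-cmd report`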
umask
The umask utility shall set the file mode creation mask of the current shell execution environment (see Section 2.12, Shell Execution Environment) to the value specified by the mask operand. This mask shall affect the initial value of the file permission bits of subsequently created files. If umask is called in a subshell or separate utility execution environment, such as one of the following: (umask 002) nohup umask ... find . -exec umask ... \; it shall not affect the file mode creation mask of the caller's environment. If the mask operand is not specified, the umask utility shall write to standard output the value of the file mode creation mask of the invoking process. The umask utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines. The following option shall be supported: -S Produce symbolic output. The default output style is unspecified, but shall be recognized on a subsequent invocation of umask on the same system as a mask operand to restore the previous file mode creation mask.
# umask > Manage the read/write/execute permissions that are masked out (i.e. > restricted) for newly created files by the user. More information: > https://manned.org/umask. * Display the current mask in octal notation: `umask` * Display the current mask in symbolic (human-readable) mode: `umask -S` * Change the mask symbolically to allow read permission for all users (the rest of the mask bits are unchanged): `umask {{a+r}}` * Set the mask (using octal) to restrict no permissions for the file's owner, and restrict all permissions for everyone else: `umask {{077}}`
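A small interactive sketch of how the mask shapes permissions (the file name is arbitrary): with a mask of 022, newly created files get mode 644 (`rw-r--r--`), i.e. 666 minus the masked bits:
`umask {{022}}`
`touch {{newfile}} && ls -l {{newfile}}`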
touch
Update the access and modification times of each FILE to the current time. A FILE argument that does not exist is created empty, unless -c or -h is supplied. A FILE argument string of - is handled specially and causes touch to change the times of the file associated with standard output. Mandatory arguments to long options are mandatory for short options too. -a change only the access time -c, --no-create do not create any files -d, --date=STRING parse STRING and use it instead of current time -f (ignored) -h, --no-dereference affect each symbolic link instead of any referenced file (useful only on systems that can change the timestamps of a symlink) -m change only the modification time -r, --reference=FILE use this file's times instead of current time -t STAMP use [[CC]YY]MMDDhhmm[.ss] instead of current time --time=WORD change the specified time: WORD is access, atime, or use: equivalent to -a WORD is modify or mtime: equivalent to -m --help display this help and exit --version output version information and exit Note that the -d and -t options accept different time-date formats.
# touch > Create files and set access/modification times. More information: > https://manned.org/man/freebsd-13.1/touch. * Create specific files: `touch {{path/to/file1 path/to/file2 ...}}` * Set the file [a]ccess or [m]odification times to the current one and don't [c]reate file if it doesn't exist: `touch -c -{{a|m}} {{path/to/file1 path/to/file2 ...}}` * Set the file [t]ime to a specific value and don't [c]reate file if it doesn't exist: `touch -c -t {{YYYYMMDDHHMM.SS}} {{path/to/file1 path/to/file2 ...}}` * Set the file time of a specific file to the time of anothe[r] file and don't [c]reate file if it doesn't exist: `touch -c -r {{~/.emacs}} {{path/to/file1 path/to/file2 ...}}`
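A supplementary sketch of the GNU -d/--date option from the description above (the date string and path are illustrative): set both timestamps to an explicit date without creating the file if it is missing:
`touch -c -d {{"2024-01-01 12:00"}} {{path/to/file}}`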
echo
Echo the STRING(s) to standard output. -n do not output the trailing newline -e enable interpretation of backslash escapes -E disable interpretation of backslash escapes (default) --help display this help and exit --version output version information and exit If -e is in effect, the following sequences are recognized: \\ backslash \a alert (BEL) \b backspace \c produce no further output \e escape \f form feed \n new line \r carriage return \t horizontal tab \v vertical tab \0NNN byte with octal value NNN (1 to 3 digits) \xHH byte with hexadecimal value HH (1 to 2 digits) NOTE: your shell may have its own version of echo, which usually supersedes the version described here. Please refer to your shell's documentation for details about the options it supports. NOTE: printf(1) is a preferred alternative, which does not have issues outputting option-like strings.
# echo > Print given arguments. More information: > https://www.gnu.org/software/coreutils/echo. * Print a text message. Note: quotes are optional: `echo "{{Hello World}}"` * Print a message with environment variables: `echo "{{My path is $PATH}}"` * Print a message without the trailing newline: `echo -n "{{Hello World}}"` * Append a message to the file: `echo "{{Hello World}}" >> {{file.txt}}` * Enable interpretation of backslash escapes (special characters): `echo -e "{{Column 1\tColumn 2}}"` * Print the exit status of the last executed command (Note: In Windows Command Prompt and PowerShell the equivalent commands are `echo %errorlevel%` and `$lastexitcode` respectively): `echo $?`
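To illustrate the closing note about printf(1) (the argument is deliberately option-like): echo may interpret `-n` as a flag, whereas printf reproduces it verbatim:
`printf '%s\n' {{-n}}`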
systemctl
systemctl may be used to introspect and control the state of the "systemd" system and service manager. Please refer to systemd(1) for an introduction into the basic concepts and functionality this tool manages. The following options are understood: -t, --type= The argument is a comma-separated list of unit types such as service and socket. When units are listed with list-units, list-dependencies, show, or status, only units of the specified types will be shown. By default, units of all types are shown. As a special case, if one of the arguments is help, a list of allowed values will be printed and the program will exit. --state= The argument is a comma-separated list of unit LOAD, SUB, or ACTIVE states. When listing units with list-units, list-dependencies, show or status, show only those in the specified states. Use --state=failed or --failed to show only failed units. As a special case, if one of the arguments is help, a list of allowed values will be printed and the program will exit. -p, --property= When showing unit/job/manager properties with the show command, limit display to properties specified in the argument. The argument should be a comma-separated list of property names, such as "MainPID". Unless specified, all known properties are shown. If specified more than once, all properties with the specified names are shown. Shell completion is implemented for property names. For the manager itself, systemctl show will show all available properties, most of which are derived or closely match the options described in systemd-system.conf(5). Properties for units vary by unit type, so showing any unit (even a non-existent one) is a way to list properties pertaining to this type. Similarly, showing any job will list properties pertaining to all jobs. Properties for units are documented in systemd.unit(5), and the pages for individual unit types systemd.service(5), systemd.socket(5), etc. -P Equivalent to --value --property=, i.e. shows the value of the property without the property name or "=". Note that using -P once will also affect all properties listed with -p/--property=. -a, --all When listing units with list-units, also show inactive units and units which are following other units. When showing unit/job/manager properties, show all properties regardless whether they are set or not. To list all units installed in the file system, use the list-unit-files command instead. When listing units with list-dependencies, recursively show dependencies of all dependent units (by default only dependencies of target units are shown). When used with status, show journal messages in full, even if they include unprintable characters or are very long. By default, fields with unprintable characters are abbreviated as "blob data". (Note that the pager may escape unprintable characters again.) -r, --recursive When listing units, also show units of local containers. Units of local containers will be prefixed with the container name, separated by a single colon character (":"). --reverse Show reverse dependencies between units with list-dependencies, i.e. follow dependencies of type WantedBy=, RequiredBy=, UpheldBy=, PartOf=, BoundBy=, instead of Wants= and similar. --after With list-dependencies, show the units that are ordered before the specified unit. In other words, recursively list units following the After= dependency. Note that any After= dependency is automatically mirrored to create a Before= dependency. 
Temporal dependencies may be specified explicitly, but are also created implicitly for units which are WantedBy= targets (see systemd.target(5)), and as a result of other directives (for example RequiresMountsFor=). Both explicitly and implicitly introduced dependencies are shown with list-dependencies. When passed to the list-jobs command, for each printed job show which other jobs are waiting for it. May be combined with --before to show both the jobs waiting for each job as well as all jobs each job is waiting for. --before With list-dependencies, show the units that are ordered after the specified unit. In other words, recursively list units following the Before= dependency. When passed to the list-jobs command, for each printed job show which other jobs it is waiting for. May be combined with --after to show both the jobs waiting for each job as well as all jobs each job is waiting for. --with-dependencies When used with status, cat, list-units, and list-unit-files, those commands print all specified units and the dependencies of those units. Options --reverse, --after, --before may be used to change what types of dependencies are shown. -l, --full Do not ellipsize unit names, process tree entries, journal output, or truncate unit descriptions in the output of status, list-units, list-jobs, and list-timers. Also, show installation targets in the output of is-enabled. --value When printing properties with show, only print the value, and skip the property name and "=". Also see option -P above. --show-types When showing sockets, show the type of the socket. --job-mode= When queuing a new job, this option controls how to deal with already queued jobs. It takes one of "fail", "replace", "replace-irreversibly", "isolate", "ignore-dependencies", "ignore-requirements", "flush", or "triggering". Defaults to "replace", except when the isolate command is used which implies the "isolate" job mode. If "fail" is specified and a requested operation conflicts with a pending job (more specifically: causes an already pending start job to be reversed into a stop job or vice versa), cause the operation to fail. If "replace" (the default) is specified, any conflicting pending job will be replaced, as necessary. If "replace-irreversibly" is specified, operate like "replace", but also mark the new jobs as irreversible. This prevents future conflicting transactions from replacing these jobs (or even being enqueued while the irreversible jobs are still pending). Irreversible jobs can still be cancelled using the cancel command. This job mode should be used on any transaction which pulls in shutdown.target. "isolate" is only valid for start operations and causes all other units to be stopped when the specified unit is started. This mode is always used when the isolate command is used. "flush" will cause all queued jobs to be canceled when the new job is enqueued. If "ignore-dependencies" is specified, then all unit dependencies are ignored for this new job and the operation is executed immediately. If passed, no required units of the unit passed will be pulled in, and no ordering dependencies will be honored. This is mostly a debugging and rescue tool for the administrator and should not be used by applications. "ignore-requirements" is similar to "ignore-dependencies", but only causes the requirement dependencies to be ignored, the ordering dependencies will still be honored. "triggering" may only be used with systemctl stop. In this mode, the specified unit and any active units that trigger it are stopped. 
See the discussion of Triggers= in systemd.unit(5) for more information about triggering units. -T, --show-transaction When enqueuing a unit job (for example as effect of a systemctl start invocation or similar), show brief information about all jobs enqueued, covering both the requested job and any added because of unit dependencies. Note that the output will only include jobs immediately part of the transaction requested. It is possible that service start-up program code run as effect of the enqueued jobs might request further jobs to be pulled in. This means that completion of the listed jobs might ultimately entail more jobs than the listed ones. --fail Shorthand for --job-mode=fail. When used with the kill command, if no units were killed, the operation results in an error. --check-inhibitors= When system shutdown or sleep state is requested, this option controls checking of inhibitor locks. It takes one of "auto", "yes" or "no". Defaults to "auto", which will behave like "yes" for interactive invocations (i.e. from a TTY) and "no" for non-interactive invocations. "yes" lets the request respect inhibitor locks. "no" lets the request ignore inhibitor locks. Applications can establish inhibitor locks to prevent certain important operations (such as CD burning) from being interrupted by system shutdown or sleep. Any user may take these locks and privileged users may override these locks. If any locks are taken, shutdown and sleep state requests will normally fail (unless privileged). However, if "no" is specified or "auto" is specified on a non-interactive request, the operation will be attempted. If locks are present, the operation may require additional privileges. Option --force provides another way to override inhibitors. -i Shortcut for --check-inhibitors=no. --dry-run Just print what would be done. Currently supported by verbs halt, poweroff, reboot, kexec, suspend, hibernate, hybrid-sleep, suspend-then-hibernate, default, rescue, emergency, and exit. -q, --quiet Suppress printing of the results of various commands and also the hints about truncated log lines. This does not suppress output of commands for which the printed output is the only result (like show). Errors are always printed. --no-warn Don't generate the warnings shown by default in the following cases: • when systemctl is invoked without procfs mounted on /proc/, • when using enable or disable on units without install information (i.e. don't have or have an empty [Install] section), • when using disable combined with --user on units that are enabled in global scope. --no-block Do not synchronously wait for the requested operation to finish. If this is not specified, the job will be verified, enqueued and systemctl will wait until the unit's start-up is completed. By passing this argument, it is only verified and enqueued. This option may not be combined with --wait. --wait Synchronously wait for started units to terminate again. This option may not be combined with --no-block. Note that this will wait forever if any given unit never terminates (by itself or by getting stopped explicitly); particularly services which use "RemainAfterExit=yes". When used with is-system-running, wait until the boot process is completed before returning. --user Talk to the service manager of the calling user, rather than the service manager of the system. --system Talk to the service manager of the system. This is the implied default. --failed List units in failed state. This is equivalent to --state=failed.
--no-wall Do not send wall message before halt, power-off and reboot. --global When used with enable and disable, operate on the global user configuration directory, thus enabling or disabling a unit file globally for all future logins of all users. --no-reload When used with enable and disable, do not implicitly reload daemon configuration after executing the changes. --no-ask-password When used with start and related commands, disables asking for passwords. Background services may require input of a password or passphrase string, for example to unlock system hard disks or cryptographic certificates. Unless this option is specified and the command is invoked from a terminal, systemctl will query the user on the terminal for the necessary secrets. Use this option to switch this behavior off. In this case, the password must be supplied by some other means (for example graphical password agents) or the service might fail. This also disables querying the user for authentication for privileged operations. --kill-whom= When used with kill, choose which processes to send a UNIX process signal to. Must be one of main, control or all to select whether to kill only the main process, the control process or all processes of the unit. The main process of the unit is the one that defines the life-time of it. A control process of a unit is one that is invoked by the manager to induce state changes of it. For example, all processes started due to the ExecStartPre=, ExecStop= or ExecReload= settings of service units are control processes. Note that there is only one control process per unit at a time, as only one state change is executed at a time. For services of type Type=forking, the initial process started by the manager for ExecStart= is a control process, while the process ultimately forked off by that one is then considered the main process of the unit (if it can be determined). This is different for service units of other types, where the process forked off by the manager for ExecStart= is always the main process itself. A service unit consists of zero or one main process, zero or one control process plus any number of additional processes. Not all unit types manage processes of these types however. For example, for mount units, control processes are defined (which are the invocations of /usr/bin/mount and /usr/bin/umount), but no main process is defined. If omitted, defaults to all. --kill-value=INT If used with the kill command, enqueues a signal along with the specified integer value parameter to the specified process(es). This operation is only available for POSIX Realtime Signals (i.e. --signal=SIGRTMIN+... or --signal=SIGRTMAX-...), and ensures the signals are generated via the sigqueue(3) system call, rather than kill(3). The specified value must be a 32bit signed integer, and may be specified either in decimal, in hexadecimal (if prefixed with "0x"), octal (if prefixed with "0o") or binary (if prefixed with "0b") If this option is used the signal will only be enqueued on the control or main process of the unit, never on other processes belonging to the unit, i.e. --kill-whom=all will only affect main and control processes but no other processes. -s, --signal= When used with kill, choose which signal to send to selected processes. Must be one of the well-known signal specifiers such as SIGTERM, SIGINT or SIGSTOP. If omitted, defaults to SIGTERM. 
The special value "help" will list the known values and the program will exit immediately, and the special value "list" will list known values along with the numerical signal numbers and the program will exit immediately. --what= Select what type of per-unit resources to remove when the clean command is invoked, see above. Takes one of configuration, state, cache, logs, runtime, fdstore to select the type of resource. This option may be specified more than once, in which case all specified resource types are removed. Also accepts the special value all as a shortcut for specifying all six resource types. If this option is not specified defaults to the combination of cache, runtime and fdstore, i.e. the three kinds of resources that are generally considered to be redundant and can be reconstructed on next invocation. Note that the explicit removal of the fdstore resource type is only useful if the FileDescriptorStorePreserve= option is enabled, since the file descriptor store is otherwise cleaned automatically when the unit is stopped. -f, --force When used with enable, overwrite any existing conflicting symlinks. When used with edit, create all of the specified units which do not already exist. When used with halt, poweroff, reboot or kexec, execute the selected operation without shutting down all units. However, all processes will be killed forcibly and all file systems are unmounted or remounted read-only. This is hence a drastic but relatively safe option to request an immediate reboot. If --force is specified twice for these operations (with the exception of kexec), they will be executed immediately, without terminating any processes or unmounting any file systems. Warning: specifying --force twice with any of these operations might result in data loss. Note that when --force is specified twice the selected operation is executed by systemctl itself, and the system manager is not contacted. This means the command should succeed even when the system manager has crashed. --message= When used with halt, poweroff or reboot, set a short message explaining the reason for the operation. The message will be logged together with the default shutdown message. --now When used with enable, the units will also be started. When used with disable or mask, the units will also be stopped. The start or stop operation is only carried out when the respective enable or disable operation has been successful. --root= When used with enable/disable/is-enabled (and related commands), use the specified root path when looking for unit files. If this option is present, systemctl will operate on the file system directly, instead of communicating with the systemd daemon to carry out changes. --image=image Takes a path to a disk image file or block device node. If specified, all operations are applied to file system in the indicated disk image. This option is similar to --root=, but operates on file systems stored in disk images or block devices. The disk image should either contain just a file system or a set of file systems within a GPT partition table, following the Discoverable Partitions Specification[2]. For further information on supported disk images, see systemd-nspawn(1)'s switch of the same name. --image-policy=policy Takes an image policy string as argument, as per systemd.image-policy(7). The policy is enforced when operating on the disk image specified via --image=, see above. If not specified defaults to the "*" policy, i.e. all recognized file systems in the image are used. 
--runtime When used with enable, disable, edit, (and related commands), make changes only temporarily, so that they are lost on the next reboot. This will have the effect that changes are not made in subdirectories of /etc/ but in /run/, with identical immediate effects, however, since the latter is lost on reboot, the changes are lost too. Similarly, when used with set-property, make changes only temporarily, so that they are lost on the next reboot. --preset-mode= Takes one of "full" (the default), "enable-only", "disable-only". When used with the preset or preset-all commands, controls whether units shall be disabled and enabled according to the preset rules, or only enabled, or only disabled. -n, --lines= When used with status, controls the number of journal lines to show, counting from the most recent ones. Takes a positive integer argument, or 0 to disable journal output. Defaults to 10. -o, --output= When used with status, controls the formatting of the journal entries that are shown. For the available choices, see journalctl(1). Defaults to "short". --firmware-setup When used with the reboot command, indicate to the system's firmware to reboot into the firmware setup interface. Note that this functionality is not available on all systems. --boot-loader-menu=timeout When used with the reboot command, indicate to the system's boot loader to show the boot loader menu on the following boot. Takes a time value as parameter — indicating the menu timeout. Pass zero in order to disable the menu timeout. Note that not all boot loaders support this functionality. --boot-loader-entry=ID When used with the reboot command, indicate to the system's boot loader to boot into a specific boot loader entry on the following boot. Takes a boot loader entry identifier as argument, or "help" in order to list available entries. Note that not all boot loaders support this functionality. --reboot-argument= This switch is used with reboot. The value is architecture and firmware specific. As an example, "recovery" might be used to trigger system recovery, and "fota" might be used to trigger a “firmware over the air” update. --plain When used with list-dependencies, list-units or list-machines, the output is printed as a list instead of a tree, and the bullet circles are omitted. --timestamp= Change the format of printed timestamps. The following values may be used: pretty (this is the default) "Day YYYY-MM-DD HH:MM:SS TZ" unix "@seconds-since-the-epoch" us, μs "Day YYYY-MM-DD HH:MM:SS.UUUUUU TZ" utc "Day YYYY-MM-DD HH:MM:SS UTC" us+utc, μs+utc "Day YYYY-MM-DD HH:MM:SS.UUUUUU UTC" --mkdir When used with bind, creates the destination file or directory before applying the bind mount. Note that even though the name of this option suggests that it is suitable only for directories, this option also creates the destination file node to mount over if the object to mount is not a directory, but a regular file, device node, socket or FIFO. --marked Only allowed with reload-or-restart. Enqueues restart jobs for all units that have the "needs-restart" mark, and reload jobs for units that have the "needs-reload" mark. When a unit marked for reload does not support reload, restart will be queued. Those properties can be set using set-property Markers=.... Unless --no-block is used, systemctl will wait for the queued jobs to finish. --read-only When used with bind, creates a read-only bind mount. --drop-in= When used with edit, use the given drop-in file name instead of override.conf. 
--when= When used with halt, poweroff, reboot or kexec, schedule the action to be performed at the given timestamp, which should adhere to the syntax documented in systemd.time(7) section "PARSING TIMESTAMPS". Specially, if "show" is given, the currently scheduled action will be shown, which can be canceled by passing an empty string or "cancel". -H, --host= Execute the operation remotely. Specify a hostname, or a username and hostname separated by "@", to connect to. The hostname may optionally be suffixed by a port ssh is listening on, separated by ":", and then a container name, separated by "/", which connects directly to a specific container on the specified host. This will use SSH to talk to the remote machine manager instance. Container names may be enumerated with machinectl -H HOST. Put IPv6 addresses in brackets. -M, --machine= Execute operation on a local container. Specify a container name to connect to, optionally prefixed by a user name to connect as and a separating "@" character. If the special string ".host" is used in place of the container name, a connection to the local system is made (which is useful to connect to a specific user's user bus: "--user --machine=lennart@.host"). If the "@" syntax is not used, the connection is made as root user. If the "@" syntax is used either the left hand side or the right hand side may be omitted (but not both) in which case the local user name and ".host" are implied. --no-pager Do not pipe output into a pager. --legend=BOOL Enable or disable printing of the legend, i.e. column headers and the footer with hints. The legend is printed by default, unless disabled with --quiet or similar. -h, --help Print a short help text and exit. --version Print a short version string and exit.
# systemctl > Control the systemd system and service manager. More information: > https://www.freedesktop.org/software/systemd/man/systemctl.html. * Show all running services: `systemctl status` * List failed units: `systemctl --failed` * Start/Stop/Restart/Reload a service: `systemctl {{start|stop|restart|reload}} {{unit}}` * Show the status of a unit: `systemctl status {{unit}}` * Enable/Disable a unit to be started on bootup: `systemctl {{enable|disable}} {{unit}}` * Mask/Unmask a unit to prevent enablement and manual activation: `systemctl {{mask|unmask}} {{unit}}` * Reload systemd, scanning for new or changed units: `systemctl daemon-reload` * Check if a unit is enabled: `systemctl is-enabled {{unit}}`
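A supplementary sketch combining the show verb with --property and --value from above (the unit name is a placeholder): print only the main PID of a unit, without the property name:
`systemctl show --property=MainPID --value {{unit}}`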
patch
The patch utility shall read a source (patch) file containing any of four forms of difference (diff) listings produced by the diff utility (normal, copied context, unified context, or in the style of ed) and apply those differences to a file. By default, patch shall read from the standard input. The patch utility shall attempt to determine the type of the diff listing, unless overruled by a -c, -e, -n, or -u option. If the patch file contains more than one patch, patch shall attempt to apply each of them as if they came from separate patch files. (In this case, the application shall ensure that the name of the patch file is determinable for each diff listing.) The patch utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines. The following options shall be supported: -b Save a copy of the original contents of each modified file, before the differences are applied, in a file of the same name with the suffix .orig appended to it. If the file already exists, it shall be overwritten; if multiple patches are applied to the same file, the .orig file shall be written only for the first patch. When the -o outfile option is also specified, file.orig shall not be created but, if outfile already exists, outfile.orig shall be created. -c Interpret the patch file as a copied context difference (the output of the utility diff when the -c or -C options are specified). -d dir Change the current directory to dir before processing as described in the EXTENDED DESCRIPTION section. -D define Mark changes with one of the following C preprocessor constructs: #ifdef define ... #endif #ifndef define ... #endif optionally combined with the C preprocessor construct #else. If the patched file is processed with the C preprocessor, where the macro define is defined, the output shall contain the changes from the patch file; otherwise, the output shall not contain the patches specified in the patch file. -e Interpret the patch file as an ed script, rather than a diff script. -i patchfile Read the patch information from the file named by the pathname patchfile, rather than the standard input. -l (The letter ell.) Cause any sequence of <blank> characters in the difference script to match any sequence of <blank> characters in the input file. Other characters shall be matched exactly. -n Interpret the script as a normal difference. -N Ignore patches where the differences have already been applied to the file; by default, already-applied patches shall be rejected. -o outfile Instead of modifying the files (specified by the file operand or the difference listings) directly, write a copy of the file referenced by each patch, with the appropriate differences applied, to outfile. Multiple patches for a single file shall be applied to the intermediate versions of the file created by any previous patches, and shall result in multiple, concatenated versions of the file being written to outfile. -p num For all pathnames in the patch file that indicate the names of files to be patched, delete num pathname components from the beginning of each pathname. If the pathname in the patch file is absolute, any leading <slash> characters shall be considered the first component (that is, -p 1 shall remove the leading <slash> characters). Specifying -p 0 shall cause the full pathname to be used. If -p is not specified, only the basename (the final pathname component) shall be used. 
-R Reverse the sense of the patch script; that is, assume that the difference script was created from the new version to the old version. The -R option cannot be used with ed scripts. The patch utility shall attempt to reverse each portion of the script before applying it. Rejected differences shall be saved in swapped format. If this option is not specified, and until a portion of the patch file is successfully applied, patch attempts to apply each portion in its reversed sense as well as in its normal sense. If the attempt is successful, the user shall be prompted to determine whether the -R option should be set. -r rejectfile Override the default reject filename. In the default case, the reject file shall have the same name as the output file, with the suffix .rej appended to it; see Patch Application. -u Interpret the patch file as a unified context difference (the output of the diff utility when the -u or -U options are specified).
# patch > Patch a file (or files) with a diff file. Note that diff files should be > generated by the `diff` command. More information: https://manned.org/patch. * Apply a patch using a diff file (filenames must be included in the diff file): `patch < {{patch.diff}}` * Apply a patch to a specific file: `patch {{path/to/file}} < {{patch.diff}}` * Patch a file writing the result to a different file: `patch {{path/to/input_file}} -o {{path/to/output_file}} < {{patch.diff}}` * Apply a patch to the current directory: `patch -p1 < {{patch.diff}}` * Apply the reverse of a patch: `patch -R < {{patch.diff}}`
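A hedged combination of the -p, -b, and -i options described above (the patch file name is hypothetical): strip one leading path component, keep .orig backups of modified files, and read the patch from a named file instead of `stdin`:
`patch -p{{1}} -b -i {{path/to/patch.diff}}`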
find
This manual page documents the GNU version of find. GNU find searches the directory tree rooted at each given starting-point by evaluating the given expression from left to right, according to the rules of precedence (see section OPERATORS), until the outcome is known (the left hand side is false for and operations, true for or), at which point find moves on to the next file name. If no starting-point is specified, `.' is assumed. If you are using find in an environment where security is important (for example if you are using it to search directories that are writable by other users), you should read the `Security Considerations' chapter of the findutils documentation, which is called Finding Files and comes with findutils. That document also includes a lot more detail and discussion than this manual page, so you may find it a more useful source of information. The -H, -L and -P options control the treatment of symbolic links. Command-line arguments following these are taken to be names of files or directories to be examined, up to the first argument that begins with `-', or the argument `(' or `!'. That argument and any following arguments are taken to be the expression describing what is to be searched for. If no paths are given, the current directory is used. If no expression is given, the expression -print is used (but you should probably consider using -print0 instead, anyway). This manual page talks about `options' within the expression list. These options control the behaviour of find but are specified immediately after the last path name. The five `real' options -H, -L, -P, -D and -O must appear before the first path name, if at all. A double dash -- could theoretically be used to signal that any remaining arguments are not options, but this does not really work due to the way find determines the end of the following path arguments: it does that by reading until an expression argument comes (which also starts with a `-'). Now, if a path argument would start with a `-', then find would treat it as an expression argument instead. Thus, to ensure that all start points are taken as such, and especially to prevent wildcard patterns expanded by the calling shell from being mistakenly treated as expression arguments, it is generally safer to prefix wildcards or dubious path names with `./' or to use absolute path names starting with '/'. Alternatively, it is generally safe though non-portable to use the GNU option -files0-from to pass arbitrary starting points to find. -P Never follow symbolic links. This is the default behaviour. When find examines or prints information about files, and the file is a symbolic link, the information used shall be taken from the properties of the symbolic link itself. -L Follow symbolic links. When find examines or prints information about files, the information used shall be taken from the properties of the file to which the link points, not from the link itself (unless it is a broken symbolic link or find is unable to examine the file to which the link points). Use of this option implies -noleaf. If you later use the -P option, -noleaf will still be in effect. If -L is in effect and find discovers a symbolic link to a subdirectory during its search, the subdirectory pointed to by the symbolic link will be searched. When the -L option is in effect, the -type predicate will always match against the type of the file that a symbolic link points to rather than the link itself (unless the symbolic link is broken).
Actions that can cause symbolic links to become broken while find is executing (for example -delete) can give rise to confusing behaviour. Using -L causes the -lname and -ilname predicates always to return false. -H Do not follow symbolic links, except while processing the command line arguments. When find examines or prints information about files, the information used shall be taken from the properties of the symbolic link itself. The only exception to this behaviour is when a file specified on the command line is a symbolic link, and the link can be resolved. For that situation, the information used is taken from whatever the link points to (that is, the link is followed). The information about the link itself is used as a fallback if the file pointed to by the symbolic link cannot be examined. If -H is in effect and one of the paths specified on the command line is a symbolic link to a directory, the contents of that directory will be examined (though of course -maxdepth 0 would prevent this). If more than one of -H, -L and -P is specified, each overrides the others; the last one appearing on the command line takes effect. Since it is the default, the -P option should be considered to be in effect unless either -H or -L is specified. GNU find frequently stats files during the processing of the command line itself, before any searching has begun. These options also affect how those arguments are processed. Specifically, there are a number of tests that compare files listed on the command line against a file we are currently considering. In each case, the file specified on the command line will have been examined and some of its properties will have been saved. If the named file is in fact a symbolic link, and the -P option is in effect (or if neither -H nor -L were specified), the information used for the comparison will be taken from the properties of the symbolic link. Otherwise, it will be taken from the properties of the file the link points to. If find cannot follow the link (for example because it has insufficient privileges or the link points to a nonexistent file) the properties of the link itself will be used. When the -H or -L options are in effect, any symbolic links listed as the argument of -newer will be dereferenced, and the timestamp will be taken from the file to which the symbolic link points. The same consideration applies to -newerXY, -anewer and -cnewer. The -follow option has a similar effect to -L, though it takes effect at the point where it appears (that is, if -L is not used but -follow is, any symbolic links appearing after -follow on the command line will be dereferenced, and those before it will not). -D debugopts Print diagnostic information; this can be helpful to diagnose problems with why find is not doing what you want. The list of debug options should be comma separated. Compatibility of the debug options is not guaranteed between releases of findutils. For a complete list of valid debug options, see the output of find -D help. Valid debug options include exec Show diagnostic information relating to -exec, -execdir, -ok and -okdir opt Prints diagnostic information relating to the optimisation of the expression tree; see the -O option. rates Prints a summary indicating how often each predicate succeeded or failed. search Navigate the directory tree verbosely. stat Print messages as files are examined with the stat and lstat system calls. The find program tries to minimise such calls. tree Show the expression tree in its original and optimised form. 
all Enable all of the other debug options (but help). help Explain the debugging options. -Olevel Enables query optimisation. The find program reorders tests to speed up execution while preserving the overall effect; that is, predicates with side effects are not reordered relative to each other. The optimisations performed at each optimisation level are as follows. 0 Equivalent to optimisation level 1. 1 This is the default optimisation level and corresponds to the traditional behaviour. Expressions are reordered so that tests based only on the names of files (for example -name and -regex) are performed first. 2 Any -type or -xtype tests are performed after any tests based only on the names of files, but before any tests that require information from the inode. On many modern versions of Unix, file types are returned by readdir() and so these predicates are faster to evaluate than predicates which need to stat the file first. If you use the -fstype FOO predicate and specify a filesystem type FOO which is not known (that is, not present in `/etc/mtab') at the time find starts, that predicate is equivalent to -false. 3 At this optimisation level, the full cost-based query optimiser is enabled. The order of tests is modified so that cheap (i.e. fast) tests are performed first and more expensive ones are performed later, if necessary. Within each cost band, predicates are evaluated earlier or later according to whether they are likely to succeed or not. For -o, predicates which are likely to succeed are evaluated earlier, and for -a, predicates which are likely to fail are evaluated earlier. The cost-based optimiser has a fixed idea of how likely any given test is to succeed. In some cases the probability takes account of the specific nature of the test (for example, -type f is assumed to be more likely to succeed than -type c). The cost-based optimiser is currently being evaluated. If it does not actually improve the performance of find, it will be removed again. Conversely, optimisations that prove to be reliable, robust and effective may be enabled at lower optimisation levels over time. However, the default behaviour (i.e. optimisation level 1) will not be changed in the 4.3.x release series. The findutils test suite runs all the tests on find at each optimisation level and ensures that the result is the same.
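As a small, hedged sketch of the symbolic-link and optimisation options described above (the directory name, file name and scratch setup are invented for illustration; the behaviour shown assumes GNU find):

```sh
# Create a directory and a symbolic link pointing at it.
mkdir -p real_dir && touch real_dir/inside.txt
ln -s real_dir link_dir

find . -name 'inside.txt'             # -P (default): link_dir is not followed, one match
find -L . -name 'inside.txt'          # -L: link_dir is followed, the file is reported twice
find -H link_dir -name 'inside.txt'   # -H: only the command-line symlink is dereferenced

# Show the expression tree before and after the cost-based optimiser (-O3) runs.
find -D tree -O3 . -type f -name '*.txt' >/dev/null
```

Note that -D and -O, like -H, -L and -P, must appear before the first path argument.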
# find > Find files or directories under the given directory tree, recursively. More > information: https://manned.org/find. * Find files by extension: `find {{root_path}} -name '{{*.ext}}'` * Find files matching multiple path/name patterns: `find {{root_path}} -path '{{**/path/**/*.ext}}' -or -name '{{*pattern*}}'` * Find directories matching a given name, in case-insensitive mode: `find {{root_path}} -type d -iname '{{*lib*}}'` * Find files matching a given pattern, excluding specific paths: `find {{root_path}} -name '{{*.py}}' -not -path '{{*/site-packages/*}}'` * Find files matching a given size range, limiting the recursive depth to "1": `find {{root_path}} -maxdepth 1 -size {{+500k}} -size {{-10M}}` * Run a command for each file (use `{}` within the command to access the filename): `find {{root_path}} -name '{{*.ext}}' -exec {{wc -l {} }}\;` * Find files modified in the last 7 days: `find {{root_path}} -daystart -mtime -{{7}}` * Find empty (0 byte) files and delete them: `find {{root_path}} -type {{f}} -empty -delete`
expect
Expect is a program that "talks" to other interactive programs according to a script. Following the script, Expect knows what can be expected from a program and what the correct response should be. An interpreted language provides branching and high-level control structures to direct the dialogue. In addition, the user can take control and interact directly when desired, afterward returning control to the script. Expectk is a mixture of Expect and Tk. It behaves just like Expect and Tk's wish. Expect can also be used directly in C or C++ (that is, without Tcl). See libexpect(3). The name "Expect" comes from the idea of send/expect sequences popularized by uucp, kermit and other modem control programs. However, unlike uucp, Expect is generalized so that it can be run as a user-level command with any program and task in mind. Expect can actually talk to several programs at the same time. For example, here are some things Expect can do: • Cause your computer to dial you back, so that you can login without paying for the call. • Start a game (e.g., rogue) and if the optimal configuration doesn't appear, restart it (again and again) until it does, then hand over control to you. • Run fsck, and in response to its questions, answer "yes", "no" or give control back to you, based on predetermined criteria. • Connect to another network or BBS (e.g., MCI Mail, CompuServe) and automatically retrieve your mail so that it appears as if it was originally sent to your local system. • Carry environment variables, current directory, or any kind of information across rlogin, telnet, tip, su, chgrp, etc. There are a variety of reasons why the shell cannot perform these tasks. (Try, you'll see.) All are possible with Expect. In general, Expect is useful for running any program which requires interaction between the program and the user. All that is necessary is that the interaction can be characterized programmatically. Expect can also give the user back control (without halting the program being controlled) if desired. Similarly, the user can return control to the script at any time.
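A minimal, hedged sketch of a scripted dialogue follows; it assumes python3 is installed and uses its `>>>` prompt purely as a convenient interactive program to drive, none of which comes from the manual text above:

```sh
# Feed an Expect script to expect on standard input; it spawns an interactive
# python3 session, waits for the prompt, sends a command, and checks the reply.
expect <<'EOF'
    spawn python3
    expect ">>> "
    send "print(6 * 7)\r"
    expect "42"
    send "exit()\r"
    expect eof
EOF
```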
# expect > Script executor that interacts with other programs that require user input. > More information: https://manned.org/expect. * Execute an expect script from a file: `expect {{path/to/file}}` * Execute a specified expect script: `expect -c "{{commands}}"` * Enter an interactive REPL (use `exit` or Ctrl + D to exit): `expect -i`
du
By default, the du utility shall write to standard output the size of the file space allocated to, and the size of the file space allocated to each subdirectory of, the file hierarchy rooted in each of the specified files. By default, when a symbolic link is encountered on the command line or in the file hierarchy, du shall count the size of the symbolic link (rather than the file referenced by the link), and shall not follow the link to another portion of the file hierarchy. The size of the file space allocated to a file of type directory shall be defined as the sum total of space allocated to all files in the file hierarchy rooted in the directory plus the space allocated to the directory itself. When du cannot stat() files or stat() or read directories, it shall report an error condition and the final exit status is affected. A file that occurs multiple times under one file operand and that has a link count greater than 1 shall be counted and written for only one entry. It is implementation-defined whether a file that has a link count no greater than 1 is counted and written just once, or is counted and written for each occurrence. It is implementation-defined whether a file that occurs under one file operand is counted for other file operands. The directory entry that is selected in the report is unspecified. By default, file sizes shall be written in 512-byte units, rounded up to the next 512-byte unit. The du utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines. The following options shall be supported: -a In addition to the default output, report the size of each file not of type directory in the file hierarchy rooted in the specified file. The -a option shall not affect whether non-directories given as file operands are listed. -H If a symbolic link is specified on the command line, du shall count the size of the file or file hierarchy referenced by the link. -k Write the file sizes in units of 1024 bytes, rather than the default 512-byte units. -L If a symbolic link is specified on the command line or encountered during the traversal of a file hierarchy, du shall count the size of the file or file hierarchy referenced by the link. -s Instead of the default output, report only the total sum for each of the specified files. -x When evaluating file sizes, evaluate only those files that have the same device as the file specified by the file operand. Specifying more than one of the mutually-exclusive options -H and -L shall not be considered an error. The last option specified shall determine the behavior of the utility.
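A brief, hedged illustration of the units and options described above (the paths are placeholders):

```sh
du path/to/dir        # default output: each subdirectory, in 512-byte units
du -k path/to/dir     # the same, in 1024-byte units
du -s -k path/to/dir  # only the grand total for the directory
du -a -k path/to/dir  # additionally report every non-directory file
du -k -L path/to/dir  # follow symbolic links encountered during the traversal
```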
# du > Disk usage: estimate and summarize file and directory space usage. More > information: https://ss64.com/osx/du.html. * List the sizes of a directory and any subdirectories, in the given unit (KiB/MiB/GiB): `du -{{k|m|g}} {{path/to/directory}}` * List the sizes of a directory and any subdirectories, in human-readable form (i.e. auto-selecting the appropriate unit for each size): `du -h {{path/to/directory}}` * Show the size of a single directory, in human-readable units: `du -sh {{path/to/directory}}` * List the human-readable sizes of a directory and of all the files and directories within it: `du -ah {{path/to/directory}}` * List the human-readable sizes of a directory and any subdirectories, up to N levels deep: `du -h -d {{2}} {{path/to/directory}}` * List the human-readable size of all `.jpg` files in subdirectories of the current directory, and show a cumulative total at the end: `du -ch {{*/*.jpg}}`
fold
The fold utility is a filter that shall fold lines from its input files, breaking the lines to have a maximum of width column positions (or bytes, if the -b option is specified). Lines shall be broken by the insertion of a <newline> such that each output line (referred to later in this section as a segment) is the maximum width possible that does not exceed the specified number of column positions (or bytes). A line shall not be broken in the middle of a character. The behavior is undefined if width is less than the number of columns any single character in the input would occupy. If the <carriage-return>, <backspace>, or <tab> characters are encountered in the input, and the -b option is not specified, they shall be treated specially: <backspace> The current count of line width shall be decremented by one, although the count never shall become negative. The fold utility shall not insert a <newline> immediately before or after any <backspace>, unless the following character has a width greater than 1 and would cause the line width to exceed width. <carriage-return> The current count of line width shall be set to zero. The fold utility shall not insert a <newline> immediately before or after any <carriage-return>. <tab> Each <tab> encountered shall advance the column position pointer to the next tab stop. Tab stops shall be at each column position n such that n modulo 8 equals 1. The fold utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines. The following options shall be supported: -b Count width in bytes rather than column positions. -s If a segment of a line contains a <blank> within the first width column positions (or bytes), break the line after the last such <blank> meeting the width constraints. If there is no <blank> meeting the requirements, the -s option shall have no effect for that output segment of the input line. -w width Specify the maximum line length, in column positions (or bytes if -b is specified). The results are unspecified if width is not a positive decimal number. The default value shall be 80.
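A short, hedged sketch of the wrapping rules above (the sample strings are arbitrary):

```sh
printf 'one two three four five six seven\n' | fold -w 10      # hard break at column 10
printf 'one two three four five six seven\n' | fold -w 10 -s   # break after the last blank before column 10
printf 'abcdefghij\n' | fold -b -w 4                            # -b counts bytes instead of column positions
```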
# fold > Wrap each line in an input file to fit a specified width and print it to > `stdout`. More information: https://manned.org/fold.1p. * Wrap each line to default width (80 characters): `fold {{path/to/file}}` * Wrap each line to width "30": `fold -w30 {{path/to/file}}` * Wrap each line to width "5" and break the line at spaces (puts each space-separated word on a new line; words longer than 5 characters are wrapped): `fold -w5 -s {{path/to/file}}`
nohup
The nohup utility shall invoke the utility named by the utility operand with arguments supplied as the argument operands. At the time the named utility is invoked, the SIGHUP signal shall be set to be ignored. If standard input is associated with a terminal, the nohup utility may redirect standard input from an unspecified file. If the standard output is a terminal, all output written by the named utility to its standard output shall be appended to the end of the file nohup.out in the current directory. If nohup.out cannot be created or opened for appending, the output shall be appended to the end of the file nohup.out in the directory specified by the HOME environment variable. If neither file can be created or opened for appending, the named utility shall not be invoked. If a file is created, the file's permission bits shall be set to S_IRUSR | S_IWUSR. If standard error is a terminal and standard output is open but is not a terminal, all output written by the named utility to its standard error shall be redirected to the same open file description as the standard output. If standard error is a terminal and standard output either is a terminal or is closed, the same output shall instead be appended to the end of the nohup.out file as described above. No options are supported.
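A small, hedged sketch of the redirection rules above (`sh -c 'date; sleep 60'` and the log file name are arbitrary examples):

```sh
nohup sh -c 'date; sleep 60' &                    # stdout is a terminal, so the date line is appended to ./nohup.out
nohup sh -c 'date; sleep 60' > job.log 2>&1 &     # explicit redirection, so nohup.out is not touched
cat nohup.out                                     # the output of the first command ends up here
```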
# nohup > Run a command that keeps running even after the terminal is closed. More > information: https://www.gnu.org/software/coreutils/nohup. * Run a process that can live beyond the terminal: `nohup {{command}} {{argument1 argument2 ...}}` * Run a process in the background: `nohup {{command}} {{argument1 argument2 ...}} &` * Run a shell script that can live beyond the terminal: `nohup {{path/to/script.sh}} &` * Run a process and write the output to a specific file: `nohup {{command}} {{argument1 argument2 ...}} > {{path/to/output_file}} &`
git-rm
Remove files matching pathspec from the index, or from the working tree and the index. git rm will not remove a file from just your working directory. (There is no option to remove a file only from the working tree and yet keep it in the index; use /bin/rm if you want to do that.) The files being removed have to be identical to the tip of the branch, and no updates to their contents can be staged in the index, though that default behavior can be overridden with the -f option. When --cached is given, the staged content has to match either the tip of the branch or the file on disk, allowing the file to be removed from just the index. When sparse-checkouts are in use (see git-sparse-checkout(1)), git rm will only remove paths within the sparse-checkout patterns. <pathspec>... Files to remove. A leading directory name (e.g. dir to remove dir/file1 and dir/file2) can be given to remove all files in the directory, and recursively all sub-directories, but this requires the -r option to be explicitly given. The command removes only the paths that are known to Git. File globbing matches across directory boundaries. Thus, given two directories d and d2, there is a difference between using git rm 'd*' and git rm 'd/*', as the former will also remove all of directory d2. For more details, see the pathspec entry in gitglossary(7). -f, --force Override the up-to-date check. -n, --dry-run Don’t actually remove any file(s). Instead, just show if they exist in the index and would otherwise be removed by the command. -r Allow recursive removal when a leading directory name is given. -- This option can be used to separate command-line options from the list of files (useful when filenames might be mistaken for command-line options). --cached Use this option to unstage and remove paths only from the index. Working tree files, whether modified or not, will be left alone. --ignore-unmatch Exit with a zero status even if no files matched. --sparse Allow updating index entries outside of the sparse-checkout cone. Normally, git rm refuses to update index entries whose paths do not fit within the sparse-checkout cone. See git-sparse-checkout(1) for more. -q, --quiet git rm normally outputs one line (in the form of an rm command) for each file removed. This option suppresses that output. --pathspec-from-file=<file> Pathspec is passed in <file> instead of commandline args. If <file> is exactly - then standard input is used. Pathspec elements are separated by LF or CR/LF. Pathspec elements can be quoted as explained for the configuration variable core.quotePath (see git-config(1)). See also --pathspec-file-nul and global --literal-pathspecs. --pathspec-file-nul Only meaningful with --pathspec-from-file. Pathspec elements are separated with NUL character and all other characters are taken literally (including newlines and quotes).
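A compact, hedged sketch of the options described above (the paths are placeholders, not taken from the manual):

```sh
git rm --dry-run 'docs/*.md'          # only show which tracked files would be removed
git rm -r docs/old                    # remove a directory from the index and the working tree
git rm --cached secrets.env           # stop tracking the file but leave it on disk
git rm --ignore-unmatch missing.txt   # exit 0 even when nothing matches (handy in scripts)
```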
# git rm > Remove files from repository index and local filesystem. More information: > https://git-scm.com/docs/git-rm. * Remove file from repository index and filesystem: `git rm {{path/to/file}}` * Remove directory: `git rm -r {{path/to/directory}}` * Remove file from repository index but keep it untouched locally: `git rm --cached {{path/to/file}}`