git-var
Prints a Git logical variable. Exits with code 1 if the variable has no value.

-l
  Cause the logical variables to be listed. In addition, all the variables of the Git configuration file .git/config are listed as well. (However, the configuration variable listing functionality is deprecated in favor of git config -l.)
# git var

> Prints a Git logical variable's value. See `git config`, which is preferred over `git var`.
> More information: https://git-scm.com/docs/git-var.

* Print the value of a Git logical variable:

`git var {{GIT_AUTHOR_IDENT|GIT_COMMITTER_IDENT|GIT_EDITOR|GIT_PAGER}}`

* [l]ist all Git logical variables:

`git var -l`
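As a small sketch of the behaviour described above (assuming `git` is installed), printing individual logical variables:

```shell
# Print single logical variables; git falls back to built-in
# defaults (e.g. a default editor and pager) when nothing is
# configured, so these always print some value.
git var GIT_PAGER
git var GIT_EDITOR

# Listing all logical variables (plus the deprecated configuration
# variable listing) would be:
#   git var -l
```

Note that variables such as `GIT_AUTHOR_IDENT` can fail with exit code 1 in environments where git cannot auto-detect a name and email, which matches the "no value" behaviour described above.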
make
The make utility will determine automatically which pieces of a large program need to be recompiled, and issue the commands to recompile them. The manual describes the GNU implementation of make, which was written by Richard Stallman and Roland McGrath, and is currently maintained by Paul Smith. Our examples show C programs, since they are very common, but you can use make with any programming language whose compiler can be run with a shell command. In fact, make is not limited to programs. You can use it to describe any task where some files must be updated automatically from others whenever the others change.

To prepare to use make, you must write a file called the makefile that describes the relationships among files in your program, and provides commands for updating each file. In a program, typically the executable file is updated from object files, which are in turn made by compiling source files.

Once a suitable makefile exists, each time you change some source files, this simple shell command:

  make

suffices to perform all necessary recompilations. The make program uses the makefile description and the last-modification times of the files to decide which of the files need to be updated. For each of those files, it issues the commands recorded in the makefile.

make executes commands in the makefile to update one or more targets, where target is typically a program. If no -f option is present, make will look for the makefiles GNUmakefile, makefile, and Makefile, in that order. Normally you should call your makefile either makefile or Makefile. (We recommend Makefile because it appears prominently near the beginning of a directory listing, right near other important files such as README.) The first name checked, GNUmakefile, is not recommended for most makefiles. You should use this name only if you have a makefile that is specific to GNU make, and will not be understood by other versions of make. If makefile is '-', the standard input is read.
make updates a target if it depends on prerequisite files that have been modified since the target was last modified, or if the target does not exist.

-b, -m
  These options are ignored for compatibility with other versions of make.

-B, --always-make
  Unconditionally make all targets.

-C dir, --directory=dir
  Change to directory dir before reading the makefiles or doing anything else. If multiple -C options are specified, each is interpreted relative to the previous one: -C / -C etc is equivalent to -C /etc. This is typically used with recursive invocations of make.

-d
  Print debugging information in addition to normal processing. The debugging information says which files are being considered for remaking, which file-times are being compared and with what results, which files actually need to be remade, which implicit rules are considered and which are applied---everything interesting about how make decides what to do.

--debug[=FLAGS]
  Print debugging information in addition to normal processing. If the FLAGS are omitted, the behavior is the same as if -d was specified. FLAGS may be any or all of the following names, comma- or space-separated; only the first character is significant, the rest may be omitted: all for all debugging output (same as using -d), basic for basic debugging, verbose for more verbose basic debugging, implicit for showing implicit rule search operations, jobs for details on invocation of commands, makefile for debugging while remaking makefiles, print to show all recipes that are run even if they are silent, and why to show the reason make decided to rebuild each target. Use none to disable all previous debugging flags.

-e, --environment-overrides
  Give variables taken from the environment precedence over variables from makefiles.

-E string, --eval string
  Interpret string using the eval function, before parsing any makefiles.

-f file, --file=file, --makefile=file
  Use file as a makefile.
-i, --ignore-errors
  Ignore all errors in commands executed to remake files.

-I dir, --include-dir=dir
  Specifies a directory dir to search for included makefiles. If several -I options are used to specify several directories, the directories are searched in the order specified. Unlike the arguments to other flags of make, directories given with -I flags may come directly after the flag: -Idir is allowed, as well as -I dir. This syntax is allowed for compatibility with the C preprocessor's -I flag.

-j [jobs], --jobs[=jobs]
  Specifies the number of jobs (commands) to run simultaneously. If there is more than one -j option, the last one is effective. If the -j option is given without an argument, make will not limit the number of jobs that can run simultaneously.

--jobserver-style=style
  The style of jobserver to use. The style may be one of fifo, pipe, or sem (Windows only).

-k, --keep-going
  Continue as much as possible after an error. While the target that failed, and those that depend on it, cannot be remade, the other dependencies of these targets can be processed all the same.

-l [load], --load-average[=load]
  Specifies that no new jobs (commands) should be started if there are other jobs running and the load average is at least load (a floating-point number). With no argument, removes a previous load limit.

-L, --check-symlink-times
  Use the latest mtime between symlinks and target.

-n, --just-print, --dry-run, --recon
  Print the commands that would be executed, but do not execute them (except in certain circumstances).

-o file, --old-file=file, --assume-old=file
  Do not remake the file file even if it is older than its dependencies, and do not remake anything on account of changes in file. Essentially the file is treated as very old and its rules are ignored.

-O[type], --output-sync[=type]
  When running multiple jobs in parallel with -j, ensure the output of each job is collected together rather than interspersed with output from other jobs.
  If type is not specified or is target, the output from the entire recipe for each target is grouped together. If type is line, the output from each command line within a recipe is grouped together. If type is recurse, output from an entire recursive make is grouped together. If type is none, output synchronization is disabled.

-p, --print-data-base
  Print the data base (rules and variable values) that results from reading the makefiles; then execute as usual or as otherwise specified. This also prints the version information given by the -v switch (see below). To print the data base without trying to remake any files, use make -p -f/dev/null.

-q, --question
  "Question mode". Do not run any commands, or print anything; just return an exit status that is zero if the specified targets are already up to date, nonzero otherwise.

-r, --no-builtin-rules
  Eliminate use of the built-in implicit rules. Also clear out the default list of suffixes for suffix rules.

-R, --no-builtin-variables
  Don't define any built-in variables.

-s, --silent, --quiet
  Silent operation; do not print the commands as they are executed.

--no-silent
  Cancel the effect of the -s option.

-S, --no-keep-going, --stop
  Cancel the effect of the -k option.

-t, --touch
  Touch files (mark them up to date without really changing them) instead of running their commands. This is used to pretend that the commands were done, in order to fool future invocations of make.

--trace
  Information about the disposition of each target is printed (why the target is being rebuilt and what commands are run to rebuild it).

-v, --version
  Print the version of the make program plus a copyright, a list of authors and a notice that there is no warranty.

-w, --print-directory
  Print a message containing the working directory before and after other processing. This may be useful for tracking down errors from complicated nests of recursive make commands.

--no-print-directory
  Turn off -w, even if it was turned on implicitly.
--shuffle[=MODE]
  Enable shuffling of goal and prerequisite ordering. MODE is one of none to disable shuffle mode, random to shuffle prerequisites in random order, reverse to consider prerequisites in reverse order, or an integer <seed> which enables random mode with a specific seed value. If MODE is omitted, the default is random.

-W file, --what-if=file, --new-file=file, --assume-new=file
  Pretend that the target file has just been modified. When used with the -n flag, this shows you what would happen if you were to modify that file. Without -n, it is almost the same as running a touch command on the given file before running make, except that the modification time is changed only in the imagination of make.

--warn-undefined-variables
  Warn when an undefined variable is referenced.
# make

> Task runner for targets described in Makefile. Mostly used to control the compilation of an executable from source code.
> More information: https://www.gnu.org/software/make/manual/make.html.

* Call the first target specified in the Makefile (usually named "all"):

`make`

* Call a specific target:

`make {{target}}`

* Call a specific target, executing 4 jobs at a time in parallel:

`make -j{{4}} {{target}}`

* Use a specific Makefile:

`make --file {{path/to/file}}`

* Execute make from another directory:

`make --directory {{path/to/directory}}`

* Force making of a target, even if source files are unchanged:

`make --always-make {{target}}`

* Override a variable defined in the Makefile:

`make {{target}} {{variable}}={{new_value}}`

* Override variables defined in the Makefile by the environment:

`make --environment-overrides {{target}}`
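The target/prerequisite/recipe relationship described above can be sketched with a minimal, hypothetical one-rule makefile (all file names here are illustrative):

```shell
# Work in a scratch directory so nothing real is touched.
cd "$(mktemp -d)"

# One rule: hello.txt depends on source.txt and is rebuilt by
# copying it. Recipe lines must start with a tab character.
printf 'hello.txt: source.txt\n\tcp source.txt hello.txt\n' > Makefile

echo "hi" > source.txt
make    # runs the recipe, because hello.txt does not exist yet
make    # does nothing: hello.txt is newer than source.txt
```

The second `make` invocation illustrates the last-modification-time check: since the target is already newer than its prerequisite, no commands are issued.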
uudecode
The uudecode utility shall read a file, or standard input if no file is specified, that includes data created by the uuencode utility. The uudecode utility shall scan the input file, searching for data compatible with one of the formats specified in uuencode, and attempt to create or overwrite the file described by the data (or overridden by the -o option). The pathname shall be contained in the data or specified by the -o option. The file access permission bits and contents for the file to be produced shall be contained in that data. The mode bits of the created file (other than standard output) shall be set from the file access permission bits contained in the data; that is, other attributes of the mode, including the file mode creation mask (see umask), shall not affect the file being produced. If either of the op characters '+' and '-' (see chmod) are specified in symbolic mode, the initial mode on which those operations are based is unspecified.

If the pathname of the file resolves to an existing file and the user does not have write permission on that file, uudecode shall terminate with an error. If the pathname of the file resolves to an existing file and the user has write permission on that file, the existing file shall be overwritten and, if possible, the mode bits of the file (other than standard output) shall be set as described above; if the mode bits cannot be set, uudecode shall not treat this as an error.

If the input data was produced by uuencode on a system with a different number of bits per byte than on the target system, the results of uudecode are unspecified.

The uudecode utility shall conform to the Base Definitions volume of POSIX.1-2017, Section 12.2, Utility Syntax Guidelines. The following option shall be supported by the implementation:

-o outfile
  A pathname of a file that shall be used instead of any pathname contained in the input data. Specifying an outfile option-argument of /dev/stdout shall indicate standard output.
# uudecode

> Decode files encoded by `uuencode`.
> More information: https://manned.org/uudecode.

* Decode a file that was encoded with `uuencode` and print the result to `stdout`:

`uudecode {{path/to/encoded_file}}`

* Decode a file that was encoded with `uuencode` and write the result to a file:

`uudecode -o {{path/to/decoded_file}} {{path/to/encoded_file}}`
diff
The diff utility shall compare the contents of file1 and file2 and write to standard output a list of changes necessary to convert file1 into file2. This list should be minimal. No output shall be produced if the files are identical.

The diff utility shall conform to the Base Definitions volume of POSIX.1-2017, Section 12.2, Utility Syntax Guidelines. The following options shall be supported:

-b
  Cause any amount of white space at the end of a line to be treated as a single <newline> (that is, the white-space characters preceding the <newline> are ignored) and other strings of white-space characters, not including <newline> characters, to compare equal.

-c
  Produce output in a form that provides three lines of copied context.

-C n
  Produce output in a form that provides n lines of copied context (where n shall be interpreted as a positive decimal integer).

-e
  Produce output in a form suitable as input for the ed utility, which can then be used to convert file1 into file2.

-f
  Produce output in an alternative form, similar in format to -e, but not intended to be suitable as input for the ed utility, and in the opposite order.

-r
  Apply diff recursively to files and directories of the same name when file1 and file2 are both directories. The diff utility shall detect infinite loops; that is, entering a previously visited directory that is an ancestor of the last file encountered. When it detects an infinite loop, diff shall write a diagnostic message to standard error and shall either recover its position in the hierarchy or terminate.

-u
  Produce output in a form that provides three lines of unified context.

-U n
  Produce output in a form that provides n lines of unified context (where n shall be interpreted as a non-negative decimal integer).
# diff

> Compare files and directories.
> More information: https://man7.org/linux/man-pages/man1/diff.1.html.

* Compare files (lists changes to turn `old_file` into `new_file`):

`diff {{old_file}} {{new_file}}`

* Compare files, ignoring white spaces:

`diff --ignore-all-space {{old_file}} {{new_file}}`

* Compare files, showing the differences side by side:

`diff --side-by-side {{old_file}} {{new_file}}`

* Compare files, showing the differences in unified format (as used by `git diff`):

`diff --unified {{old_file}} {{new_file}}`

* Compare directories recursively (shows names for differing files/directories as well as changes made to files):

`diff --recursive {{old_directory}} {{new_directory}}`

* Compare directories, only showing the names of files that differ:

`diff --recursive --brief {{old_directory}} {{new_directory}}`

* Create a patch file for Git from the differences of two text files, treating nonexistent files as empty:

`diff --text --unified --new-file {{old_file}} {{new_file}} > {{diff.patch}}`
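The unified-context form described above can be sketched on two throwaway files (names are illustrative; note that diff exits with a nonzero status when the inputs differ):

```shell
cd "$(mktemp -d)"
printf 'a\nb\nc\n' > old_file
printf 'a\nB\nc\n' > new_file

# -u prints up to three lines of unified context around each
# change; `|| true` because diff exits 1 for differing inputs.
diff -u old_file new_file || true
```

The output marks removed lines with `-` and added lines with `+`, with the unchanged context lines around them, which is the format consumed by tools like `patch` and produced by `git diff`.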
ln
In the first synopsis form, the ln utility shall create a new directory entry (link) at the destination path specified by the target_file operand. If the -s option is specified, a symbolic link shall be created for the file specified by the source_file operand. This first synopsis form shall be assumed when the final operand does not name an existing directory; if more than two operands are specified and the final is not an existing directory, an error shall result.

In the second synopsis form, the ln utility shall create a new directory entry (link), or if the -s option is specified a symbolic link, for each file specified by a source_file operand, at a destination path in the existing directory named by target_dir. If the last operand specifies an existing file of a type not specified by the System Interfaces volume of POSIX.1-2017, the behavior is implementation-defined. The corresponding destination path for each source_file shall be the concatenation of the target directory pathname, a <slash> character if the target directory pathname did not end in a <slash>, and the last pathname component of the source_file. The second synopsis form shall be assumed when the final operand names an existing directory.

For each source_file:

1. If the destination path exists and was created by a previous step, it is unspecified whether ln shall write a diagnostic message to standard error, do nothing more with the current source_file, and go on to any remaining source_files; or will continue processing the current source_file. If the destination path exists:

   a. If the -f option is not specified, ln shall write a diagnostic message to standard error, do nothing more with the current source_file, and go on to any remaining source_files.

   b. If the destination path names the same directory entry as the current source_file, ln shall write a diagnostic message to standard error, do nothing more with the current source_file, and go on to any remaining source_files.

   c. Actions shall be performed equivalent to the unlink() function defined in the System Interfaces volume of POSIX.1-2017, called using the destination path as the path argument. If this fails for any reason, ln shall write a diagnostic message to standard error, do nothing more with the current source_file, and go on to any remaining source_files.

2. If the -s option is specified, actions shall be performed equivalent to the symlink() function with source_file as the path1 argument and the destination path as the path2 argument. The ln utility shall do nothing more with source_file and shall go on to any remaining files.

3. If source_file is a symbolic link:

   a. If the -P option is in effect, actions shall be performed equivalent to the linkat() function with source_file as the path1 argument, the destination path as the path2 argument, AT_FDCWD as the fd1 and fd2 arguments, and zero as the flag argument.

   b. If the -L option is in effect, actions shall be performed equivalent to the linkat() function with source_file as the path1 argument, the destination path as the path2 argument, AT_FDCWD as the fd1 and fd2 arguments, and AT_SYMLINK_FOLLOW as the flag argument.

   The ln utility shall do nothing more with source_file and shall go on to any remaining files.

4. Actions shall be performed equivalent to the link() function defined in the System Interfaces volume of POSIX.1-2017 using source_file as the path1 argument, and the destination path as the path2 argument.

The ln utility shall conform to the Base Definitions volume of POSIX.1-2017, Section 12.2, Utility Syntax Guidelines. The following options shall be supported:

-f
  Force existing destination pathnames to be removed to allow the link.

-L
  For each source_file operand that names a file of type symbolic link, create a (hard) link to the file referenced by the symbolic link.

-P
  For each source_file operand that names a file of type symbolic link, create a (hard) link to the symbolic link itself.

-s
  Create symbolic links instead of hard links. If the -s option is specified, the -L and -P options shall be silently ignored.

Specifying more than one of the mutually-exclusive options -L and -P shall not be considered an error. The last option specified shall determine the behavior of the utility (unless the -s option causes it to be ignored). If the -s option is not specified and neither a -L nor a -P option is specified, it is implementation-defined which of the -L and -P options will be used as the default.
# ln

> Creates links to files and directories.
> More information: https://www.gnu.org/software/coreutils/ln.

* Create a symbolic link to a file or directory:

`ln -s {{/path/to/file_or_directory}} {{path/to/symlink}}`

* Overwrite an existing symbolic link to point to a different file:

`ln -sf {{/path/to/new_file}} {{path/to/symlink}}`

* Create a hard link to a file:

`ln {{/path/to/file}} {{path/to/hardlink}}`
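The hard-link vs. symbolic-link distinction described above can be sketched as follows (file names are illustrative):

```shell
cd "$(mktemp -d)"
echo "data" > original

ln original hard     # hard link: a second directory entry for the same inode
ln -s original soft  # symbolic link: a small file that holds a pathname

# original and hard share an inode number; soft has its own and
# points at the name "original".
ls -li original hard soft
```

Because a hard link references the inode directly, deleting `original` would leave `hard` intact, while `soft` would dangle.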
cal
cal displays a simple calendar. If no arguments are specified, the current month is displayed. The month may be specified as a number (1-12), as a month name or as an abbreviated month name according to the current locale.

Two different calendar systems are used, Gregorian and Julian. These are nearly identical systems, with Gregorian making a small adjustment to the frequency of leap years; this facilitates improved synchronization with solar events like the equinoxes. The Gregorian calendar reform was introduced in 1582, but its adoption continued up to 1923. By default cal uses the adoption date of 3 Sept 1752. From that date forward the Gregorian calendar is displayed; previous dates use the Julian calendar system. 11 days were removed at the time of adoption to bring the calendar in sync with solar events. So Sept 1752 has a mix of Julian and Gregorian dates, by which the 2nd is followed by the 14th (the 3rd through the 13th are absent). Optionally, either the proleptic Gregorian calendar or the Julian calendar may be used exclusively. See --reform below.

-1, --one
  Display single month output. (This is the default.)

-3, --three
  Display three months spanning the date.

-n, --months number
  Display number of months, starting from the month containing the date.

-S, --span
  Display months spanning the date.

-s, --sunday
  Display Sunday as the first day of the week.

-m, --monday
  Display Monday as the first day of the week.

-v, --vertical
  Display using a vertical layout (aka ncal(1) mode).

--iso
  Display the proleptic Gregorian calendar exclusively. This option does not affect week numbers and the first day of the week. See --reform below.

-j, --julian
  Use day-of-year numbering for all calendars. These are also called ordinal days. Ordinal days range from 1 to 366. This option does not switch from the Gregorian to the Julian calendar system; that is controlled by the --reform option. Sometimes Gregorian calendars using ordinal dates are referred to as Julian calendars. This can be confusing due to the many date-related conventions that use Julian in their name: (ordinal) julian date, julian (calendar) date, (astronomical) julian date, (modified) julian date, and more. This option is named julian because ordinal days are identified as julian by the POSIX standard. However, be aware that cal also uses the Julian calendar system. See DESCRIPTION above.

--reform val
  This option sets the adoption date of the Gregorian calendar reform. Calendar dates previous to reform use the Julian calendar system. Calendar dates after reform use the Gregorian calendar system. The argument val can be:

  • 1752 - sets 3 September 1752 as the reform date (default). This is when the Gregorian calendar reform was adopted by the British Empire.

  • gregorian - display Gregorian calendars exclusively. This special placeholder sets the reform date below the smallest year that cal can use, meaning all calendar output uses the Gregorian calendar system. This is called the proleptic Gregorian calendar, because dates prior to the calendar system's creation use extrapolated values.

  • iso - alias of gregorian. The ISO 8601 standard for the representation of dates and times in information interchange requires using the proleptic Gregorian calendar.

  • julian - display Julian calendars exclusively. This special placeholder sets the reform date above the largest year that cal can use, meaning all calendar output uses the Julian calendar system. See DESCRIPTION above.

-y, --year
  Display a calendar for the whole year.

-Y, --twelve
  Display a calendar for the next twelve months.

-w, --week[=number]
  Display week numbers in the calendar (US or ISO-8601). See the NOTES section for more details.

--color[=when]
  Colorize the output. The optional argument when can be auto, never or always. If the when argument is omitted, it defaults to auto. The colors can be disabled; for the current built-in default see the --help output. See also the COLORS section.

-c, --columns=columns
  Number of columns to use. auto uses as many as fit the terminal.

-h, --help
  Display help text and exit.

-V, --version
  Print version and exit.
# cal

> Prints calendar information.
> More information: https://ss64.com/osx/cal.html.

* Display a calendar for the current month:

`cal`

* Display previous, current and next month:

`cal -3`

* Display a calendar for a specific month (1-12 or name):

`cal -m {{month}}`

* Display a calendar for the current year:

`cal -y`

* Display a calendar for a specific year (4 digits):

`cal {{year}}`

* Display a calendar for a specific month and year:

`cal {{month}} {{year}}`

* Display date of Easter (Western Christian churches) in a given year:

`ncal -e {{year}}`
file
This manual page documents version 5.44 of the file command.

file tests each argument in an attempt to classify it. There are three sets of tests, performed in this order: filesystem tests, magic tests, and language tests. The first test that succeeds causes the file type to be printed.

The type printed will usually contain one of the words text (the file contains only printing characters and a few common control characters and is probably safe to read on an ASCII terminal), executable (the file contains the result of compiling a program in a form understandable to some UNIX kernel or another), or data meaning anything else (data is usually “binary” or non-printable). Exceptions are well-known file formats (core files, tar archives) that are known to contain binary data. When modifying magic files or the program itself, make sure to preserve these keywords. Users depend on knowing that all the readable files in a directory have the word “text” printed. Don't do as Berkeley did and change “shell commands text” to “shell script”.

The filesystem tests are based on examining the return from a stat(2) system call. The program checks to see if the file is empty, or if it's some sort of special file. Any known file types appropriate to the system you are running on (sockets, symbolic links, or named pipes (FIFOs) on those systems that implement them) are intuited if they are defined in the system header file <sys/stat.h>.

The magic tests are used to check for files with data in particular fixed formats. The canonical example of this is a binary executable (compiled program) a.out file, whose format is defined in <elf.h>, <a.out.h> and possibly <exec.h> in the standard include directory. These files have a “magic number” stored in a particular place near the beginning of the file that tells the UNIX operating system that the file is a binary executable, and which of several types thereof. The concept of a “magic number” has been applied by extension to data files.
Any file with some invariant identifier at a small fixed offset into the file can usually be described in this way. The information identifying these files is read from the compiled magic file /usr/local/share/misc/magic.mgc, or the files in the directory /usr/local/share/misc/magic if the compiled file does not exist. In addition, if $HOME/.magic.mgc or $HOME/.magic exists, it will be used in preference to the system magic files.

If a file does not match any of the entries in the magic file, it is examined to see if it seems to be a text file. ASCII, ISO-8859-x, non-ISO 8-bit extended-ASCII character sets (such as those used on Macintosh and IBM PC systems), UTF-8-encoded Unicode, UTF-16-encoded Unicode, and EBCDIC character sets can be distinguished by the different ranges and sequences of bytes that constitute printable text in each set. If a file passes any of these tests, its character set is reported. ASCII, ISO-8859-x, UTF-8, and extended-ASCII files are identified as “text” because they will be mostly readable on nearly any terminal; UTF-16 and EBCDIC are only “character data” because, while they contain text, it is text that will require translation before it can be read.

In addition, file will attempt to determine other characteristics of text-type files. If the lines of a file are terminated by CR, CRLF, or NEL, instead of the Unix-standard LF, this will be reported. Files that contain embedded escape sequences or overstriking will also be identified.

Once file has determined the character set used in a text-type file, it will attempt to determine in what language the file is written. The language tests look for particular strings (cf. <names.h>) that can appear anywhere in the first few blocks of a file. For example, the keyword .br indicates that the file is most likely a troff(1) input file, just as the keyword struct indicates a C program. These tests are less reliable than the previous two groups, so they are performed last.
The language test routines also test for some miscellany (such as tar(1) archives, JSON files). Any file that cannot be identified as having been written in any of the character sets listed above is simply said to be “data”.

--apple
  Causes the file command to output the file type and creator code as used by older MacOS versions. The code consists of eight letters, the first describing the file type, the latter the creator. This option works properly only for file formats that have the apple-style output defined.

-b, --brief
  Do not prepend filenames to output lines (brief mode).

-C, --compile
  Write a magic.mgc output file that contains a pre-parsed version of the magic file or directory.

-c, --checking-printout
  Cause a checking printout of the parsed form of the magic file. This is usually used in conjunction with the -m option to debug a new magic file before installing it.

-d
  Prints internal debugging information to stderr.

-E
  On filesystem errors (file not found etc), instead of handling the error as regular output as POSIX mandates and keeping going, issue an error message and exit.

-e, --exclude testname
  Exclude the test named in testname from the list of tests made to determine the file type. Valid test names are:

  apptype   EMX application type (only on EMX).
  ascii     Various types of text files (this test will try to guess the text encoding, irrespective of the setting of the ‘encoding’ option).
  encoding  Different text encodings for soft magic tests.
  tokens    Ignored for backwards compatibility.
  cdf       Prints details of Compound Document Files.
  compress  Checks for, and looks inside, compressed files.
  csv       Checks Comma Separated Value files.
  elf       Prints ELF file details, provided soft magic tests are enabled and the elf magic is found.
  json      Examines JSON (RFC-7159) files by parsing them for compliance.
  soft      Consults magic files.
  simh      Examines SIMH tape files.
  tar       Examines tar files by verifying the checksum of the 512 byte tar header. Excluding this test can provide more detailed content description by using the soft magic method.
  text      A synonym for ‘ascii’.

--exclude-quiet
  Like --exclude but ignore tests that file does not know about. This is intended for compatibility with older versions of file.

--extension
  Print a slash-separated list of valid extensions for the file type found.

-F, --separator separator
  Use the specified string as the separator between the filename and the file result returned. Defaults to ‘:’.

-f, --files-from namefile
  Read the names of the files to be examined from namefile (one per line) before the argument list. Either namefile or at least one filename argument must be present; to test the standard input, use ‘-’ as a filename argument. Please note that namefile is unwrapped and the enclosed filenames are processed when this option is encountered and before any further options processing is done. This allows one to process multiple lists of files with different command line arguments on the same file invocation. Thus if you want to set the delimiter, you need to do it before you specify the list of files, like: “-F @ -f namefile”, instead of: “-f namefile -F @”.

-h, --no-dereference
  This option causes symlinks not to be followed (on systems that support symbolic links). This is the default if the environment variable POSIXLY_CORRECT is not defined.

-i, --mime
  Causes the file command to output mime type strings rather than the more traditional human readable ones. Thus it may say ‘text/plain; charset=us-ascii’ rather than “ASCII text”.

--mime-type, --mime-encoding
  Like -i, but print only the specified element(s).

-k, --keep-going
  Don't stop at the first match, keep going. Subsequent matches will have the string ‘\012- ’ prepended. (If you want a newline, see the -r option.) The magic pattern with the highest strength (see the -l option) comes first.
-l, --list Shows a list of patterns and their strength sorted descending by magic(4) strength which is used for the matching (see also the -k option). -L, --dereference This option causes symlinks to be followed, as the like-named option in ls(1) (on systems that support symbolic links). This is the default if the environment variable POSIXLY_CORRECT is defined. -m, --magic-file magicfiles Specify an alternate list of files and directories containing magic. This can be a single item, or a colon-separated list. If a compiled magic file is found alongside a file or directory, it will be used instead. -N, --no-pad Don't pad filenames so that they align in the output. -n, --no-buffer Force stdout to be flushed after checking each file. This is only useful if checking a list of files. It is intended to be used by programs that want filetype output from a pipe. -p, --preserve-date On systems that support utime(3) or utimes(2), attempt to preserve the access time of files analyzed, to pretend that file never read them. -P, --parameter name=value Set various parameter limits.

    Name        Default  Explanation
    bytes       1M       max number of bytes to read from file
    elf_notes   256      max ELF notes processed
    elf_phnum   2K       max ELF program sections processed
    elf_shnum   32K      max ELF sections processed
    elf_shsize  128MB    max ELF section size processed
    encoding    65K      max number of bytes to determine encoding
    indir       50       recursion limit for indirect magic
    name        50       use count limit for name/use magic
    regex       8K       length limit for regex searches

-r, --raw Don't translate unprintable characters to \ooo. Normally file translates unprintable characters to their octal representation. -s, --special-files Normally, file only attempts to read and determine the type of argument files which stat(2) reports are ordinary files. This prevents problems, because reading special files may have peculiar consequences. Specifying the -s option causes file to also read argument files which are block or character special files. 
This is useful for determining the filesystem types of the data in raw disk partitions, which are block special files. This option also causes file to disregard the file size as reported by stat(2) since on some systems it reports a zero size for raw disk partitions. -S, --no-sandbox On systems where libseccomp (https://github.com/seccomp/libseccomp) is available, the -S option disables sandboxing which is enabled by default. This option is needed for file to execute external decompressing programs, i.e. when the -z option is specified and the built-in decompressors are not available. On systems where sandboxing is not available, this option has no effect. -v, --version Print the version of the program and exit. -z, --uncompress Try to look inside compressed files. -Z, --uncompress-noreport Try to look inside compressed files, but report information about the contents only, not the compression. -0, --print0 Output a null character ‘\0’ after the end of the filename. Nice to cut(1) the output. This does not affect the separator, which is still printed. If this option is repeated more than once, then file prints just the filename followed by a NUL followed by the description (or ERROR: text) followed by a second NUL for each entry. --help Print a help message and exit.
# file > Determine file type. More information: https://manned.org/file. * Give a description of the type of the specified file. Works fine for files with no file extension: `file {{path/to/file}}` * Look inside a zipped file and determine the file type(s) inside: `file -z {{foo.zip}}` * Allow file to work with special or device files: `file -s {{path/to/file}}` * Don't stop at first file type match; keep going until the end of the file: `file -k {{path/to/file}}` * Determine the MIME encoding type of a file: `file -i {{path/to/file}}`
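The brief and MIME output modes above can be sketched quickly; this is a guarded example that is a no-op on systems without file(1):

```shell
# Inspect a throwaway text file with and without the filename prefix.
command -v file >/dev/null 2>&1 || exit 0
tmp=$(mktemp)
printf 'hello world\n' > "$tmp"
file "$tmp"                   # e.g. "/tmp/tmp.XXXX: ASCII text"
file -b "$tmp"                # brief mode: description only, no filename
file -b --mime-type "$tmp"    # only the MIME type, e.g. "text/plain"
rm -f "$tmp"
```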
vi
This utility shall be provided on systems that both support the User Portability Utilities option and define the POSIX2_CHAR_TERM symbol. On other systems it is optional. The vi (visual) utility is a screen-oriented text editor. Only the open and visual modes of the editor are described in POSIX.1‐2008; see the line editor ex for additional editing capabilities used in vi. The user can switch back and forth between vi and ex and execute ex commands from within vi. This reference page uses the term edit buffer to describe the current working text. No specific implementation is implied by this term. All editing changes are performed on the edit buffer, and no changes to it shall affect any file until an editor command writes the file. When using vi, the terminal screen acts as a window into the editing buffer. Changes made to the editing buffer shall be reflected in the screen display; the position of the cursor on the screen shall indicate the position within the editing buffer. Certain terminals do not have all the capabilities necessary to support the complete vi definition. When these commands cannot be supported on such terminals, this condition shall not produce an error message such as ``not an editor command'' or report a syntax error. The implementation may either accept the commands and produce results on the screen that are the result of an unsuccessful attempt to meet the requirements of this volume of POSIX.1‐2017 or report an error describing the terminal-related deficiency. The vi utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines, except that '+' may be recognized as an option delimiter as well as '-'. The following options shall be supported: -c command See the ex command description of the -c option. -r See the ex command description of the -r option. -R See the ex command description of the -R option. -t tagstring See the ex command description of the -t option. 
-w size See the ex command description of the -w option.
# vi > This command is an alias of `vim`. * View documentation for the original command: `tldr vim`
pwdx
pwdx - report current working directory of a process

Synopsis: pwdx [options] pid [...]

-V, --version Output version information and exit. -h, --help Output help screen and exit.

SEE ALSO: ps(1), pgrep(1)

STANDARDS: No standards apply, but pwdx looks an awful lot like a SunOS command.

AUTHOR: Nicholas Miell ⟨nmiell@gmail.com⟩ wrote pwdx in 2004. Please send bug reports to ⟨procps@freelists.org⟩. This page is part of the procps-ng (/proc filesystem utilities) project: https://gitlab.com/procps-ng/procps
# pwdx > Print working directory of a process. More information: > https://manned.org/pwdx. * Print current working directory of a process: `pwdx {{process_id}}`
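A minimal sketch, assuming a Linux system with pwdx (from procps) installed; guarded to be a no-op elsewhere:

```shell
# Report the working directory of the current shell process.
command -v pwdx >/dev/null 2>&1 || exit 0
pwdx $$                          # e.g. "1234: /home/user"
# Several PIDs may be given at once; inspecting PID 1 may need root:
pwdx 1 $$ 2>/dev/null || true
```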
locate
This manual page documents the GNU version of locate. For each given pattern, locate searches one or more databases of file names and displays the file names that contain the pattern. Patterns can contain shell-style metacharacters: `*', `?', and `[]'. The metacharacters do not treat `/' or `.' specially. Therefore, a pattern `foo*bar' can match a file name that contains `foo3/bar', and a pattern `*duck*' can match a file name that contains `lake/.ducky'. Patterns that contain metacharacters should be quoted to protect them from expansion by the shell. If a pattern is a plain string — it contains no metacharacters — locate displays all file names in the database that contain that string anywhere. If a pattern does contain metacharacters, locate only displays file names that match the pattern exactly. As a result, patterns that contain metacharacters should usually begin with a `*', and will most often end with one as well. The exceptions are patterns that are intended to explicitly match the beginning or end of a file name. The file name databases contain lists of files that were on the system when the databases were last updated. The system administrator can choose the file name of the default database, the frequency with which the databases are updated, and the directories for which they contain entries; see updatedb(1). If locate's output is going to a terminal, unusual characters in the output are escaped in the same way as for the -print action of the find command. If the output is not going to a terminal, file names are printed exactly as-is. -0, --null Use ASCII NUL as a separator, instead of newline. -A, --all Print only names which match all non-option arguments, not those matching one or more non-option arguments. -b, --basename Results are considered to match if the pattern specified matches the final component of the name of a file as listed in the database. This final component is usually referred to as the `base name'. 
-c, --count Instead of printing the matched filenames, just print the total number of matches we found, unless --print (-p) is also present. -d path, --database=path Instead of searching the default file name database, search the file name databases in path, which is a colon- separated list of database file names. You can also use the environment variable LOCATE_PATH to set the list of database files to search. The option overrides the environment variable if both are used. Empty elements in the path are taken to be synonyms for the file name of the default database. A database can be supplied on stdin, using `-' as an element of path. If more than one element of path is `-', later instances are ignored (and a warning message is printed). The file name database format changed starting with GNU find and locate version 4.0 to allow machines with different byte orderings to share the databases. This version of locate can automatically recognize and read databases produced for older versions of GNU locate or Unix versions of locate or find. Support for the old locate database format will be discontinued in a future release. -e, --existing Only print out such names that currently exist (instead of such names that existed when the database was created). Note that this may slow down the program a lot, if there are many matches in the database. If you are using this option within a program, please note that it is possible for the file to be deleted after locate has checked that it exists, but before you use it. -E, --non-existing Only print out such names that currently do not exist (instead of such names that existed when the database was created). Note that this may slow down the program a lot, if there are many matches in the database. --help Print a summary of the options to locate and exit. -i, --ignore-case Ignore case distinctions in both the pattern and the file names. -l N, --limit=N Limit the number of matches to N. 
If a limit is set via this option, the number of results printed for the -c option will never be larger than this number. -L, --follow If testing for the existence of files (with the -e or -E options), consider broken symbolic links to be non-existing. This is the default. --max-database-age D Normally, locate will issue a warning message when it searches a database which is more than 8 days old. This option changes that value to something other than 8. The effect of specifying a negative value is undefined. -m, --mmap Accepted but does nothing, for compatibility with BSD locate. -P, -H, --nofollow If testing for the existence of files (with the -e or -E options), treat broken symbolic links as if they were existing files. The -H form of this option is provided purely for similarity with find; the use of -P is recommended over -H. -p, --print Print search results when they normally would not, because of the presence of --statistics (-S) or --count (-c). -r, --regex The pattern specified on the command line is understood to be a regular expression, as opposed to a glob pattern. Regular expressions work in the same way as in emacs, except for the fact that "." will match a newline. GNU find uses the same regular expressions. Filenames whose full paths match the specified regular expression are printed (or, in the case of the -c option, counted). If you wish to anchor your regular expression at the ends of the full path name, then as is usual with regular expressions, you should use the characters ^ and $ to signify this. --regextype R Use regular expression dialect R. Supported dialects include `findutils-default', `posix-awk', `posix-basic', `posix-egrep', `posix-extended', `posix-minimal-basic', `awk', `ed', `egrep', `emacs', `gnu-awk', `grep' and `sed'. See the Texinfo documentation for a detailed explanation of these dialects. -s, --stdio Accepted but does nothing, for compatibility with BSD locate. 
-S, --statistics Print various statistics about each locate database and then exit without performing a search, unless non-option arguments are given. For compatibility with BSD, -S is accepted as a synonym for --statistics. However, the output of locate -S is different for the GNU and BSD implementations of locate. --version Print the version number of locate and exit. -w, --wholename Match against the whole name of the file as listed in the database. This is the default.
# locate

> Find filenames quickly. More information: https://manned.org/locate.

* Look for pattern in the database. Note: the database is recomputed periodically (usually weekly or daily):

`locate "{{pattern}}"`

* Look for a file by its exact filename (a pattern containing no globbing characters is interpreted as `*pattern*`):

`locate */{{filename}}`

* Recompute the database. You need to do it if you want to find recently added files:

`sudo /usr/libexec/locate.updatedb`
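Since the shell would otherwise expand the metacharacters before locate ever sees them, quoting matters; a small guarded sketch (a no-op if locate or its database is unavailable):

```shell
# Guarded sketch of pattern quoting and counting with locate.
command -v locate >/dev/null 2>&1 || exit 0
# Quote the pattern so the shell cannot glob-expand it first:
locate -c '*.conf' 2>/dev/null || true     # -c: print only the match count
# No metacharacters, so this matches the string anywhere in a path:
locate -i -l 5 passwd 2>/dev/null || true  # -i: ignore case, -l 5: at most 5 results
```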
rm
The rm utility shall remove the directory entry specified by each file argument. If either of the files dot or dot-dot are specified as the basename portion of an operand (that is, the final pathname component) or if an operand resolves to the root directory, rm shall write a diagnostic message to standard error and do nothing more with such operands. For each file the following steps shall be taken: 1. If the file does not exist: a. If the -f option is not specified, rm shall write a diagnostic message to standard error. b. Go on to any remaining files. 2. If file is of type directory, the following steps shall be taken: a. If neither the -R option nor the -r option is specified, rm shall write a diagnostic message to standard error, do nothing more with file, and go on to any remaining files. b. If file is an empty directory, rm may skip to step 2d. If the -f option is not specified, and either the permissions of file do not permit writing and the standard input is a terminal or the -i option is specified, rm shall write a prompt to standard error and read a line from the standard input. If the response is not affirmative, rm shall do nothing more with the current file and go on to any remaining files. c. For each entry contained in file, other than dot or dot- dot, the four steps listed here (1 to 4) shall be taken with the entry as if it were a file operand. The rm utility shall not traverse directories by following symbolic links into other parts of the hierarchy, but shall remove the links themselves. d. If the -i option is specified, rm shall write a prompt to standard error and read a line from the standard input. If the response is not affirmative, rm shall do nothing more with the current file, and go on to any remaining files. 3. 
If file is not of type directory, the -f option is not specified, and either the permissions of file do not permit writing and the standard input is a terminal or the -i option is specified, rm shall write a prompt to the standard error and read a line from the standard input. If the response is not affirmative, rm shall do nothing more with the current file and go on to any remaining files. 4. If the current file is a directory, rm shall perform actions equivalent to the rmdir() function defined in the System Interfaces volume of POSIX.1‐2017 called with a pathname of the current file used as the path argument. If the current file is not a directory, rm shall perform actions equivalent to the unlink() function defined in the System Interfaces volume of POSIX.1‐2017 called with a pathname of the current file used as the path argument. If this fails for any reason, rm shall write a diagnostic message to standard error, do nothing more with the current file, and go on to any remaining files. The rm utility shall be able to descend to arbitrary depths in a file hierarchy, and shall not fail due to path length limitations (unless an operand specified by the user exceeds system limitations). The rm utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines. The following options shall be supported: -f Do not prompt for confirmation. Do not write diagnostic messages or modify the exit status in the case of no file operands, or in the case of operands that do not exist. Any previous occurrences of the -i option shall be ignored. -i Prompt for confirmation as described previously. Any previous occurrences of the -f option shall be ignored. -R Remove file hierarchies. See the DESCRIPTION. -r Equivalent to -R.
# rm > Remove files or directories. See also: `rmdir`. More information: > https://www.gnu.org/software/coreutils/rm. * Remove specific files: `rm {{path/to/file1 path/to/file2 ...}}` * Remove specific files ignoring nonexistent ones: `rm -f {{path/to/file1 path/to/file2 ...}}` * Remove specific files [i]nteractively prompting before each removal: `rm -i {{path/to/file1 path/to/file2 ...}}` * Remove specific files printing info about each removal: `rm -v {{path/to/file1 path/to/file2 ...}}` * Remove specific files and directories [r]ecursively: `rm -r {{path/to/file_or_directory1 path/to/file_or_directory2 ...}}`
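The step-by-step rules above can be exercised quickly in a scratch directory:

```shell
# Demonstrate rm on regular files, missing operands, and directories.
tmp=$(mktemp -d)
touch "$tmp/a" "$tmp/b"
rm "$tmp/a"                  # remove a regular file
rm -f "$tmp/missing"         # -f: no diagnostic, exit status 0 for a missing operand
rm "$tmp" 2>/dev/null || echo "directories need -r"   # prints "directories need -r"
rm -r "$tmp"                 # remove the whole hierarchy
```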
ldapsearch
ldapsearch is a shell-accessible interface to the ldap_search_ext(3) library call. ldapsearch opens a connection to an LDAP server, binds, and performs a search using specified parameters. The filter should conform to the string representation for search filters as defined in RFC 4515. If not provided, the default filter, (objectClass=*), is used. If ldapsearch finds one or more entries, the attributes specified by attrs are returned. If * is listed, all user attributes are returned. If + is listed, all operational attributes are returned. If no attrs are listed, all user attributes are returned. If only 1.1 is listed, no attributes will be returned. The search results are displayed using an extended version of LDIF. Option -L controls the format of the output. -V[V] Print version info. If -VV is given, exit after providing version info. Otherwise proceed with the specified search -d debuglevel Set the LDAP debugging level to debuglevel. ldapsearch must be compiled with LDAP_DEBUG defined for this option to have any effect. -n Show what would be done, but don't actually perform the search. Useful for debugging in conjunction with -v. -v Run in verbose mode, with many diagnostics written to standard output. -c Continuous operation mode. Errors are reported, but ldapsearch will continue with searches. The default is to exit after reporting an error. Only useful in conjunction with -f. -u Include the User Friendly Name form of the Distinguished Name (DN) in the output. -t[t] A single -t writes retrieved non-printable values to a set of temporary files. This is useful for dealing with values containing non-character data such as jpegPhoto or audio. A second -t writes all retrieved values to files. -T path Write temporary files to directory specified by path (default: system default tmp directory). The environment variables TMPDIR, TMP, or TEMP will override the default path. -F prefix URL prefix for temporary files. 
Default is file://path where path is the system default tmp directory or the value specified with -T. -A Retrieve attributes only (no values). This is useful when you just want to see if an attribute is present in an entry and are not interested in the specific values. -L Search results are displayed in LDAP Data Interchange Format detailed in ldif(5). A single -L restricts the output to LDIFv1. A second -L disables comments. A third -L disables printing of the LDIF version. The default is to use an extended version of LDIF. -S attribute Sort the entries returned based on attribute. The default is not to sort entries returned. If attribute is a zero-length string (""), the entries are sorted by the components of their Distinguished Name. See ldap_sort(3) for more details. Note that ldapsearch normally prints out entries as it receives them. The use of the -S option defeats this behavior, causing all entries to be retrieved, then sorted, then printed. -b searchbase Use searchbase as the starting point for the search instead of the default. -s {base|one|sub|children} Specify the scope of the search to be one of base, one, sub, or children to specify a base object, one-level, subtree, or children search. The default is sub. Note: children scope requires LDAPv3 subordinate feature extension. -a {never|always|search|find} Specify how alias dereferencing is done. Should be one of never, always, search, or find to specify that aliases are never dereferenced, always dereferenced, dereferenced when searching, or dereferenced only when locating the base object for the search. The default is to never dereference aliases. -l timelimit Wait at most timelimit seconds for a search to complete. A timelimit of 0 (zero) or none means no limit. A timelimit of max means the maximum integer allowable by the protocol. A server may impose a maximal timelimit which only the root user may override. -z sizelimit Retrieve at most sizelimit entries for a search. 
A sizelimit of 0 (zero) or none means no limit. A sizelimit of max means the maximum integer allowable by the protocol. A server may impose a maximal sizelimit which only the root user may override. -f file Read a series of lines from file, performing one LDAP search for each line. In this case, the filter given on the command line is treated as a pattern where the first and only occurrence of %s is replaced with a line from file. Any other occurrence of the % character in the pattern will be regarded as an error. Where it is desired that the search filter include a % character, the character should be encoded as \25 (see RFC 4515). If file is a single - character, then the lines are read from standard input. ldapsearch will exit when the first non-successful search result is returned, unless -c is used. -M[M] Enable manage DSA IT control. -MM makes control critical. -x Use simple authentication instead of SASL. -D binddn Use the Distinguished Name binddn to bind to the LDAP directory. For SASL binds, the server is expected to ignore this value. -W Prompt for simple authentication. This is used instead of specifying the password on the command line. -w passwd Use passwd as the password for simple authentication. -y passwdfile Use complete contents of passwdfile as the password for simple authentication. -H ldapuri Specify URI(s) referring to the ldap server(s); a list of URI, separated by whitespace or commas is expected; only the protocol/host/port fields are allowed. As an exception, if no host/port is specified, but a DN is, the DN is used to look up the corresponding host(s) using the DNS SRV records, according to RFC 2782. The DN must be a non-empty sequence of AVAs whose attribute type is "dc" (domain component), and must be escaped according to RFC 2396. -P {2|3} Specify the LDAP protocol version to use. -e [!]ext[=extparam] -E [!]ext[=extparam] Specify general extensions with -e and search extensions with -E. ´!´ indicates criticality. 
General extensions:
    [!]assert=<filter>              (an RFC 4515 Filter)
    !authzid=<authzid>              ("dn:<dn>" or "u:<user>")
    [!]bauthzid                     (RFC 3829 authzid control)
    [!]chaining[=<resolve>[/<cont>]]
    [!]manageDSAit
    [!]noop
    ppolicy
    [!]postread[=<attrs>]           (a comma-separated attribute list)
    [!]preread[=<attrs>]            (a comma-separated attribute list)
    [!]relax
    sessiontracking[=<username>]
    abandon,cancel,ignore           (SIGINT sends abandon/cancel, or ignores response; if critical, doesn't wait for SIGINT. not really controls)

Search extensions:
    !dontUseCopy
    [!]domainScope                  (domain scope)
    [!]mv=<filter>                  (matched values filter)
    [!]pr=<size>[/prompt|noprompt]  (paged results/prompt)
    [!]sss=[-]<attr[:OID]>[/[-]<attr[:OID]>...]  (server side sorting)
    [!]subentries[=true|false]      (subentries)
    [!]sync=ro[/<cookie>]           (LDAP Sync refreshOnly)
            rp[/<cookie>][/<slimit>]  (LDAP Sync refreshAndPersist)
    [!]vlv=<before>/<after>(/<offset>/<count>|:<value>)  (virtual list view)
    [!]deref=derefAttr:attr[,attr[...]][;derefAttr:attr[,attr[...]]]
    [!]<oid>[=:<value>|::<b64value>]

-o opt[=optparam] Specify any ldap.conf(5) option or one of the following: nettimeout=<timeout> (in seconds, or "none" or "max") ldif_wrap=<width> (in columns, or "no" for no wrapping) -O security-properties Specify SASL security properties. -I Enable SASL Interactive mode. Always prompt. Default is to prompt only as needed. -Q Enable SASL Quiet mode. Never prompt. -N Do not use reverse DNS to canonicalize SASL host name. -U authcid Specify the authentication ID for SASL bind. The form of the ID depends on the actual SASL mechanism used. -R realm Specify the realm of authentication ID for SASL bind. The form of the realm depends on the actual SASL mechanism used. -X authzid Specify the requested authorization ID for SASL bind. authzid must be one of the following formats: dn:<distinguished name> or u:<username> -Y mech Specify the SASL mechanism to be used for authentication. If it's not specified, the program will choose the best mechanism the server knows. 
-Z[Z] Issue StartTLS (Transport Layer Security) extended operation. If you use -ZZ, the command will require the operation to be successful.
# ldapsearch

> Query an LDAP directory. More information: https://docs.ldap.com/ldap-
> sdk/docs/tool-usages/ldapsearch.html.

* Query an LDAP server for all items that are a member of the given group and return the object's displayName value:

`ldapsearch -D '{{admin_DN}}' -w '{{password}}' -h {{ldap_host}} -b {{base_ou}} '{{memberOf=group1}}' displayName`

* Query an LDAP server with a no-newline password file for all items that are a member of the given group and return the object's displayName value:

`ldapsearch -D '{{admin_DN}}' -y '{{password_file}}' -h {{ldap_host}} -b {{base_ou}} '{{memberOf=group1}}' displayName`

* Return 5 items that match the given filter:

`ldapsearch -D '{{admin_DN}}' -w '{{password}}' -h {{ldap_host}} -b {{base_ou}} '{{memberOf=group1}}' -z 5 displayName`

* Wait up to 7 seconds for a response:

`ldapsearch -D '{{admin_DN}}' -w '{{password}}' -h {{ldap_host}} -b {{base_ou}} '{{memberOf=group1}}' -l 7 displayName`

* Invert the filter:

`ldapsearch -D '{{admin_DN}}' -w '{{password}}' -h {{ldap_host}} -b {{base_ou}} '(!(memberOf={{group1}}))' displayName`

* Return all items that are part of multiple groups, returning the display name for each item:

`ldapsearch -D '{{admin_DN}}' -w '{{password}}' -h {{ldap_host}} '(&({{memberOf=group1}})({{memberOf=group2}})({{memberOf=group3}}))' "displayName"`

* Return all items that are members of at least 1 of the specified groups:

`ldapsearch -D '{{admin_DN}}' -w '{{password}}' -h {{ldap_host}} '(|({{memberOf=group1}})({{memberOf=group2}})({{memberOf=group3}}))' displayName`

* Combine multiple boolean logic filters:

`ldapsearch -D '{{admin_DN}}' -w '{{password}}' -h {{ldap_host}} '(&({{memberOf=group1}})({{memberOf=group2}})(!({{memberOf=group3}})))' displayName`
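The boolean filters in the examples above are plain strings in RFC 4515 prefix notation, so they can be composed in the shell before being passed to ldapsearch; a sketch, where the group DNs are hypothetical placeholders, not a real directory layout:

```shell
# Compose an AND/NOT filter in RFC 4515 prefix notation.
g1='memberOf=cn=group1,ou=groups,dc=example,dc=com'   # hypothetical DN
g2='memberOf=cn=group2,ou=groups,dc=example,dc=com'   # hypothetical DN
filter="(&($g1)(!($g2)))"    # member of group1 AND NOT member of group2
echo "$filter"
# It would then be passed as the filter argument, e.g.:
# ldapsearch -x -H ldap://ldap.example.com -b 'dc=example,dc=com' "$filter" displayName
```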
git-clean
Cleans the working tree by recursively removing files that are not under version control, starting from the current directory. Normally, only files unknown to Git are removed, but if the -x option is specified, ignored files are also removed. This can, for example, be useful to remove all build products. If any optional <pathspec>... arguments are given, only those paths that match the pathspec are affected. -d Normally, when no <pathspec> is specified, git clean will not recurse into untracked directories to avoid removing too much. Specify -d to have it recurse into such directories as well. If a <pathspec> is specified, -d is irrelevant; all untracked files matching the specified paths (with exceptions for nested git directories mentioned under --force) will be removed. -f, --force If the Git configuration variable clean.requireForce is not set to false, git clean will refuse to delete files or directories unless given -f or -i. Git will refuse to modify untracked nested git repositories (directories with a .git subdirectory) unless a second -f is given. -i, --interactive Show what would be done and clean files interactively. See “Interactive mode” for details. -n, --dry-run Don’t actually remove anything, just show what would be done. -q, --quiet Be quiet, only report errors, but not the files that are successfully removed. -e <pattern>, --exclude=<pattern> Use the given exclude pattern in addition to the standard ignore rules (see gitignore(5)). -x Don’t use the standard ignore rules (see gitignore(5)), but still use the ignore rules given with -e options from the command line. This allows removing all untracked files, including build products. This can be used (possibly in conjunction with git restore or git reset) to create a pristine working directory to test a clean build. -X Remove only files ignored by Git. This may be useful to rebuild everything from scratch, but keep manually created files.
# git clean > Remove untracked files from the working tree. More information: https://git- > scm.com/docs/git-clean. * Delete files that are not tracked by Git: `git clean` * Interactively delete files that are not tracked by Git: `git clean -i` * Show what files would be deleted without actually deleting them: `git clean --dry-run` * Forcefully delete files that are not tracked by Git: `git clean -f` * Forcefully delete directories that are not tracked by Git: `git clean -fd` * Delete untracked files, including ignored files in `.gitignore` and `.git/info/exclude`: `git clean -x`
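A guarded sketch of the dry-run versus forced behavior in a scratch repository (the file names and identity settings are made up for the example):

```shell
# Show what git clean would remove, then actually remove it.
command -v git >/dev/null 2>&1 || exit 0
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
touch tracked untracked
git add tracked
git -c user.email=a@example.com -c user.name=a commit -qm init
git clean -n    # dry run: reports the untracked file without deleting it
git clean -f    # forced: actually deletes it
ls              # only "tracked" remains
```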
git-bugreport
Captures information about the user’s machine, Git client, and repository state, as well as a form requesting information about the behavior the user observed, into a single text file which the user can then share, for example to the Git mailing list, in order to report an observed bug. The following information is requested from the user: • Reproduction steps • Expected behavior • Actual behavior The following information is captured automatically: • git version --build-options • uname sysname, release, version, and machine strings • Compiler-specific info string • A list of enabled hooks • $SHELL Additional information may be gathered into a separate zip archive using the --diagnose option, and can be attached alongside the bugreport document to provide additional context to readers. This tool is invoked via the typical Git setup process, which means that in some cases, it might not be able to launch - for example, if a relevant config file is unreadable. In this kind of scenario, it may be helpful to manually gather the kind of information listed above when manually asking for help. -o <path>, --output-directory <path> Place the resulting bug report file in <path> instead of the current directory. -s <format>, --suffix <format> Specify an alternate suffix for the bugreport name, to create a file named git-bugreport-<formatted suffix>. This should take the form of a strftime(3) format string; the current local time will be used. --no-diagnose, --diagnose[=<mode>] Create a zip archive of supplemental information about the user’s machine, Git client, and repository state. The archive is written to the same output directory as the bug report and is named git-diagnostics-<formatted suffix>. Without mode specified, the diagnostic archive will contain the default set of statistics reported by git diagnose. An optional mode value may be specified to change which information is included in the archive. 
See git-diagnose(1) for the list of valid values for mode and details about their usage.
# git bugreport

> Captures debug information from the system and user, generating a text file to aid in the reporting of a bug in Git. More information: https://git-scm.com/docs/git-bugreport.

* Create a new bug report file in the current directory: `git bugreport`
* Create a new bug report file in the specified directory, creating it if it does not exist: `git bugreport --output-directory {{path/to/directory}}`
* Create a new bug report file with the specified filename suffix in `strftime` format: `git bugreport --suffix {{%m%d%y}}`
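As a sketch of how the suffix and output directory combine: the suffix is a strftime(3) format, so `%Y` below becomes the current year in the report's file name. This assumes Git 2.27 or newer (which introduced `git bugreport`); the directory names are invented for illustration.

```shell
set -e
workdir=$(mktemp -d)
cd "$workdir"

# GIT_EDITOR=true suppresses any interactive editor prompt so the
# report file is simply written and left in place.
GIT_EDITOR=true git bugreport --suffix %Y --output-directory reports

# The report lands at reports/git-bugreport-<year>.txt:
ls reports
```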
keyctl
This program controls the kernel key management facility through a variety of subcommands.
# keyctl

> Manipulate the Linux kernel keyring. More information: https://manned.org/keyctl.

* List keys in a specific keyring: `keyctl list {{target_keyring}}`
* List current keys in the user default session: `keyctl list {{@us}}`
* Store a key in a specific keyring: `keyctl add {{type_keyring}} {{key_name}} {{key_value}} {{target_keyring}}`
* Store a key with its value from `stdin`: `echo -n {{key_value}} | keyctl padd {{type_keyring}} {{key_name}} {{target_keyring}}`
* Put a timeout on a key: `keyctl timeout {{key_name}} {{timeout_in_seconds}}`
* Read a key and format it as a hex-dump if not printable: `keyctl read {{key_name}}`
* Read a key and format as-is: `keyctl pipe {{key_name}}`
* Revoke a key and prevent any further action on it: `keyctl revoke {{key_name}}`
dpkg-query
dpkg-query is a tool to show information about packages listed in the dpkg database.

--admindir=dir
    Change the location of the dpkg database. The default location is /usr/local/var/lib/dpkg.

--root=directory
    Set the root directory to directory, which sets the administrative directory to «directory/usr/local/var/lib/dpkg» (since dpkg 1.21.0).

--load-avail
    Also load the available file when using the --show and --list commands, which now default to only querying the status file (since dpkg 1.16.2).

--no-pager
    Disables the use of any pager when showing information (since dpkg 1.19.2).

-f, --showformat=format
    This option is used to specify the format of the output --show will produce (short option since dpkg 1.13.1). The format is a string that will be output for each package listed.

    In the format string, “\” introduces escapes:

        \n  newline
        \r  carriage return
        \t  tab

    “\” before any other character suppresses any special meaning of the following character, which is useful for “\” and “$”.

    Package information can be included by inserting variable references to package fields using the syntax “${field[;width]}”. Fields are printed right-aligned unless the width is negative, in which case left alignment will be used.
The following fields are recognized but they are not necessarily available in the status file (only internal fields or fields stored in the binary package end up in it):

    Architecture, Bugs, Conffiles (internal), Config-Version (internal), Conflicts, Breaks, Depends, Description, Enhances, Protected, Essential, Filename (internal, front-end related), Homepage, Installed-Size, MD5sum (internal, front-end related), MSDOS-Filename (internal, front-end related), Maintainer, Origin, Package, Pre-Depends, Priority, Provides, Recommends, Replaces, Revision (obsolete), Section, Size (internal, front-end related), Source, Status (internal), Suggests, Tag (usually not in .deb but in repository Packages files), Triggers-Awaited (internal), Triggers-Pending (internal), Version

The following are virtual fields, generated by dpkg-query from values from other fields (note that these do not use valid names for fields in control files):

binary:Package
    It contains the binary package name with a possible architecture qualifier like “libc6:amd64” (since dpkg 1.16.2). An architecture qualifier will be present to make the package name unambiguous, for packages with a Multi-Arch field with the value same or with a foreign architecture, which is an architecture that is neither the native one nor all.

binary:Synopsis
    It contains the package short description (since dpkg 1.19.1).

binary:Summary
    This is an alias for binary:Synopsis (since dpkg 1.16.2).

db:Status-Abbrev
    It contains the abbreviated package status (as three characters), such as “ii ” or “iHR” (since dpkg 1.16.2). See the --list command description for more details.

db:Status-Want
    It contains the package wanted status, part of the Status field (since dpkg 1.17.11).

db:Status-Status
    It contains the package status word, part of the Status field (since dpkg 1.17.11).

db:Status-Eflag
    It contains the package status error flag, part of the Status field (since dpkg 1.17.11).
db-fsys:Files
    It contains the list of the package filesystem entries separated by newlines (since dpkg 1.19.3).

db-fsys:Last-Modified
    It contains the timestamp in seconds of the last time the package filesystem entries were modified (since dpkg 1.19.3).

source:Package
    It contains the source package name for this binary package (since dpkg 1.16.2).

source:Version
    It contains the source package version for this binary package (since dpkg 1.16.2).

source:Upstream-Version
    It contains the source package upstream version for this binary package (since dpkg 1.18.16).

The default format string is “${binary:Package}\t${Version}\n”. Actually, all other fields found in the status file (i.e. user defined fields) can be requested, too. They will be printed as-is, though; no conversion nor error checking is done on them. To get the name of the dpkg maintainer and the installed version, you could run:

    dpkg-query -f='${binary:Package} ${Version}\t${Maintainer}\n' -W dpkg
# dpkg-query

> A tool that shows information about installed packages. More information: https://manpages.debian.org/latest/dpkg/dpkg-query.1.html.

* List all installed packages: `dpkg-query --list`
* List installed packages matching a pattern: `dpkg-query --list '{{libc6*}}'`
* List all files installed by a package: `dpkg-query --listfiles {{libc6}}`
* Show information about a package: `dpkg-query --status {{libc6}}`
* Search for packages that own files matching a pattern: `dpkg-query --search {{/etc/ld.so.conf.d}}`
git-blame
Annotates each line in the given file with information from the revision which last modified the line. Optionally, start annotating from the given revision.

When specified one or more times, -L restricts annotation to the requested lines.

The origin of lines is automatically followed across whole-file renames (currently there is no option to turn the rename-following off). To follow lines moved from one file to another, or to follow lines that were copied and pasted from another file, etc., see the -C and -M options.

The report does not tell you anything about lines which have been deleted or replaced; you need to use a tool such as git diff or the "pickaxe" interface briefly mentioned in the following paragraph.

Apart from supporting file annotation, Git also supports searching the development history for when a code snippet occurred in a change. This makes it possible to track when a code snippet was added to a file, moved or copied between files, and eventually deleted or replaced. It works by searching for a text string in the diff. A small example of the pickaxe interface that searches for blame_usage:

    $ git log --pretty=oneline -S'blame_usage'
    5040f17eba15504bad66b14a645bddd9b015ebb7 blame -S <ancestry-file>
    ea4c7f9bf69e781dd0cd88d2bccb2bf5cc15c9a7 git-blame: Make the output

-b
    Show blank SHA-1 for boundary commits. This can also be controlled via the blame.blankBoundary config option.

--root
    Do not treat root commits as boundaries. This can also be controlled via the blame.showRoot config option.

--show-stats
    Include additional statistics at the end of blame output.

-L <start>,<end>, -L :<funcname>
    Annotate only the line range given by <start>,<end>, or by the function name regex <funcname>. May be specified multiple times. Overlapping ranges are allowed.

    <start> and <end> are optional. -L <start> or -L <start>, spans from <start> to end of file. -L ,<end> spans from start of file to <end>.
<start> and <end> can take one of these forms:

• number
    If <start> or <end> is a number, it specifies an absolute line number (lines count from 1).

• /regex/
    This form will use the first line matching the given POSIX regex. If <start> is a regex, it will search from the end of the previous -L range, if any, otherwise from the start of file. If <start> is ^/regex/, it will search from the start of file. If <end> is a regex, it will search starting at the line given by <start>.

• +offset or -offset
    This is only valid for <end> and will specify a number of lines before or after the line given by <start>.

If :<funcname> is given in place of <start> and <end>, it is a regular expression that denotes the range from the first funcname line that matches <funcname>, up to the next funcname line. :<funcname> searches from the end of the previous -L range, if any, otherwise from the start of file. ^:<funcname> searches from the start of file. The function names are determined in the same way as git diff works out patch hunk headers (see Defining a custom hunk-header in gitattributes(5)).

-l
    Show long rev (Default: off).

-t
    Show raw timestamp (Default: off).

-S <revs-file>
    Use revisions from revs-file instead of calling git-rev-list(1).

--reverse <rev>..<rev>
    Walk history forward instead of backward. Instead of showing the revision in which a line appeared, this shows the last revision in which a line has existed. This requires a range of revision like START..END where the path to blame exists in START. git blame --reverse START is taken as git blame --reverse START..HEAD for convenience.

--first-parent
    Follow only the first parent commit upon seeing a merge commit. This option can be used to determine when a line was introduced to a particular integration branch, rather than when it was introduced to the history overall.

-p, --porcelain
    Show in a format designed for machine consumption.
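The numeric form of the range syntax can be sketched in a throwaway repository (the repository, file name, and author identity below are invented for illustration):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name Alice
git config user.email alice@example.com

# A three-line file in a single commit.
printf 'one\ntwo\nthree\n' > file.txt
git add file.txt
git commit -q -m 'add file'

# Annotate only lines 2 through 3; line 1 is omitted from the output.
git blame -L 2,3 file.txt
```

Each printed line carries the abbreviated commit hash, the author, a timestamp, the line number, and the line content, for lines 2 and 3 only.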
--line-porcelain
    Show the porcelain format, but output commit information for each line, not just the first time a commit is referenced. Implies --porcelain.

--incremental
    Show the result incrementally in a format designed for machine consumption.

--encoding=<encoding>
    Specifies the encoding used to output author names and commit summaries. Setting it to none makes blame output unconverted data. For more information see the discussion about encoding in the git-log(1) manual page.

--contents <file>
    Annotate using the contents from the named file, starting from <rev> if it is specified, and HEAD otherwise. You may specify - to make the command read from the standard input for the file contents.

--date <format>
    Specifies the format used to output dates. If --date is not provided, the value of the blame.date config variable is used. If the blame.date config variable is also not set, the iso format is used. For supported values, see the discussion of the --date option at git-log(1).

--[no-]progress
    Progress status is reported on the standard error stream by default when it is attached to a terminal. This flag enables progress reporting even if not attached to a terminal. Can't use --progress together with --porcelain or --incremental.

-M[<num>]
    Detect moved or copied lines within a file. When a commit moves or copies a block of lines (e.g. the original file has A and then B, and the commit changes it to B and then A), the traditional blame algorithm notices only half of the movement and typically blames the lines that were moved up (i.e. B) to the parent and assigns blame to the lines that were moved down (i.e. A) to the child commit. With this option, both groups of lines are blamed on the parent by running extra passes of inspection.

    <num> is optional but it is the lower bound on the number of alphanumeric characters that Git must detect as moving/copying within a file for it to associate those lines with the parent commit. The default value is 20.
-C[<num>]
    In addition to -M, detect lines moved or copied from other files that were modified in the same commit. This is useful when you reorganize your program and move code around across files. When this option is given twice, the command additionally looks for copies from other files in the commit that creates the file. When this option is given three times, the command additionally looks for copies from other files in any commit.

    <num> is optional but it is the lower bound on the number of alphanumeric characters that Git must detect as moving/copying between files for it to associate those lines with the parent commit. The default value is 40. If more than one -C option is given, the <num> argument of the last -C will take effect.

--ignore-rev <rev>
    Ignore changes made by the revision when assigning blame, as if the change never happened. Lines that were changed or added by an ignored commit will be blamed on the previous commit that changed that line or nearby lines. This option may be specified multiple times to ignore more than one revision. If the blame.markIgnoredLines config option is set, then lines that were changed by an ignored commit and attributed to another commit will be marked with a ? in the blame output. If the blame.markUnblamableLines config option is set, then those lines touched by an ignored commit that we could not attribute to another revision are marked with a *.

--ignore-revs-file <file>
    Ignore revisions listed in file, which must be in the same format as an fsck.skipList. This option may be repeated, and these files will be processed after any files specified with the blame.ignoreRevsFile config option. An empty file name, "", will clear the list of revs from previously processed files.

--color-lines
    Color line annotations in the default format differently if they come from the same commit as the preceding line. This makes it easier to distinguish code blocks introduced by different commits.
The color defaults to cyan and can be adjusted using the color.blame.repeatedLines config option.

--color-by-age
    Color line annotations depending on the age of the line in the default format. The color.blame.highlightRecent config option controls what color is used for each range of age.

-h
    Show help message.

-c
    Use the same output mode as git-annotate(1) (Default: off).

--score-debug
    Include debugging information related to the movement of lines between files (see -C) and lines moved within a file (see -M). The first number listed is the score. This is the number of alphanumeric characters detected as having been moved between or within files. This must be above a certain threshold for git blame to consider those lines of code to have been moved.

-f, --show-name
    Show the filename in the original commit. By default the filename is shown if there is any line that came from a file with a different name, due to rename detection.

-n, --show-number
    Show the line number in the original commit (Default: off).

-s
    Suppress the author name and timestamp from the output.

-e, --show-email
    Show the author email instead of author name (Default: off). This can also be controlled via the blame.showEmail config option.

-w
    Ignore whitespace when comparing the parent's version and the child's to find where the lines came from.

--abbrev=<n>
    Instead of using the default 7+1 hexadecimal digits as the abbreviated object name, use <m>+1 digits, where <m> is at least <n> but ensures the commit object names are unique. Note that 1 column is used for a caret to mark the boundary commit.
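The effect of --ignore-rev can be sketched with a throwaway repository (names and file contents are invented): a second commit by "Bob" rewrites a line, but blame is told to skip that revision, so the line is attributed to "Alice"'s original commit instead.

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q

git config user.name Alice
git config user.email alice@example.com
printf 'hello world\n' > greeting.txt
git add greeting.txt
git commit -q -m 'original wording'

# A later commit by Bob rewrites the line (think: a bulk reformat).
git config user.name Bob
git config user.email bob@example.com
printf 'HELLO WORLD\n' > greeting.txt
git commit -q -am 'shout'

# Without --ignore-rev the line is blamed on Bob's commit; with it,
# blame skips that revision and reports Alice's original commit.
git blame --ignore-rev "$(git rev-parse HEAD)" greeting.txt
```

This requires Git 2.23 or newer, which introduced --ignore-rev.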
# git blame

> Show commit hash and last author on each line of a file. More information: https://git-scm.com/docs/git-blame.

* Print file with author name and commit hash on each line: `git blame {{path/to/file}}`
* Print file with author email and commit hash on each line: `git blame -e {{path/to/file}}`
* Print file with author name and commit hash on each line at a specific commit: `git blame {{commit}} {{path/to/file}}`
* Print file with author name and commit hash on each line before a specific commit: `git blame {{commit}}~ {{path/to/file}}`
login
The login program is used to establish a new session with the system. It is normally invoked automatically by responding to the login: prompt on the user's terminal. login may be special to the shell and may not be invoked as a sub-process. When called from a shell, login should be executed as exec login, which will cause the user to exit from the current shell (and thus prevents the newly logged-in user from returning to the caller's session). Attempting to execute login from any shell but the login shell will produce an error message.

The user is then prompted for a password, where appropriate. Echoing is disabled to prevent revealing the password. Only a small number of password failures are permitted before login exits and the communications link is severed.

If password aging has been enabled for your account, you may be prompted for a new password before proceeding. You will be forced to provide your old password and the new password before continuing. Please refer to passwd(1) for more information.

Your user and group ID will be set according to their values in the /etc/passwd file. The values for $HOME, $SHELL, $PATH, $LOGNAME, and $MAIL are set according to the appropriate fields in the password entry. Ulimit, umask and nice values may also be set according to entries in the GECOS field.

On some installations, the environment variable $TERM will be initialized to the terminal type on your tty line, as specified in /etc/ttytype.

An initialization script for your command interpreter may also be executed. Please see the appropriate manual section for more information on this function.

A subsystem login is indicated by the presence of a "*" as the first character of the login shell. The given home directory will be used as the root of a new file system which the user is actually logged into.

The login program is NOT responsible for removing users from the utmp file.
It is the responsibility of getty(8) and init(8) to clean up apparent ownership of a terminal session. If you use login from the shell prompt without exec, the user you use will continue to appear to be logged in even after you log out of the "subsession".

-f
    Do not perform authentication; the user is preauthenticated. Note: In that case, username is mandatory.

-h
    Name of the remote host for this login.

-p
    Preserve environment.

-r
    Perform autologin protocol for rlogin.

The -r, -h and -f options are only used when login is invoked by root.
# login

> Initiates a session for a user. More information: https://manned.org/login.

* Log in as a user: `login {{user}}`
* Log in as user without authentication if user is preauthenticated: `login -f {{user}}`
* Log in as user and preserve environment: `login -p {{user}}`
* Log in as a user on a remote host: `login -h {{host}} {{user}}`
git-show-index
Read the .idx file for a Git packfile (created with git-pack-objects(1) or git-index-pack(1)) from the standard input, and dump its contents. The output consists of one object per line, with each line containing two or three space-separated columns:

• the first column is the offset in bytes of the object within the corresponding packfile
• the second column is the object id of the object
• if the index version is 2 or higher, the third column contains the CRC32 of the object data

The objects are output in the order in which they are found in the index file, which should be (in a correctly constructed file) sorted by object id.

Note that you can get more information on a packfile by calling git-verify-pack(1). However, as this command considers only the index file itself, it's both faster and more flexible.

--object-format=<hash-algorithm>
    Specify the given object format (hash algorithm) for the index file. The valid values are sha1 and (if enabled) sha256. The default is the algorithm for the current repository (set by extensions.objectFormat), or sha1 if no value is set or outside a repository.

    THIS OPTION IS EXPERIMENTAL! SHA-256 support is experimental and still in an early stage. A SHA-256 repository will in general not be able to share work with "regular" SHA-1 repositories. It should be assumed that, e.g., Git internal file formats in relation to SHA-256 repositories may change in backwards-incompatible ways. Only use --object-format=sha256 for testing purposes.
# git show-index

> Show the packed archive index of a Git repository. More information: https://git-scm.com/docs/git-show-index.

* Read an IDX file for a Git packfile from `stdin` and dump its contents to `stdout`: `git show-index < {{path/to/file.idx}}`
* Specify the hash algorithm for the index file (experimental): `git show-index --object-format={{sha1|sha256}} < {{path/to/file.idx}}`
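Tying the pieces together, a sketch in a throwaway repository (names are invented): pack the repository's loose objects, then feed the generated .idx to show-index on standard input.

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name Alice
git config user.email alice@example.com
printf 'payload\n' > data.txt
git add data.txt
git commit -q -m 'add data'

# Pack loose objects; this writes a pack-*.pack and a matching pack-*.idx
# under .git/objects/pack/.
git gc -q

# show-index reads the index on standard input and prints one
# "offset object-id [crc32]" line per object (blob, tree, commit here).
for idx in .git/objects/pack/*.idx; do
    git show-index < "$idx"
done
```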
crontab
Crontab is the program used to install a crontab table file, and to remove or list the existing tables used to serve the cron(8) daemon. Each user can have their own crontab, and though these are files in /var/spool/, they are not intended to be edited directly.

For SELinux in MLS mode, you can define more crontabs for each range. For more information, see selinux(8).

In this version of Cron it is possible to use a network-mounted shared /var/spool/cron across a cluster of hosts and specify that only one of the hosts should run the crontab jobs in the particular directory at any one time. You may also use crontab from any of these hosts to edit the same shared set of crontab files, and to set and query which host should run the crontab jobs.

Scheduling cron jobs with crontab can be allowed or disallowed for different users. For this purpose, use the cron.allow and cron.deny files. If the cron.allow file exists, a user must be listed in it to be allowed to use crontab. If the cron.allow file does not exist but the cron.deny file does exist, then a user must not be listed in the cron.deny file in order to use crontab. If neither of these files exists, then only the super user is allowed to use crontab.

Another way to restrict the scheduling of cron jobs beyond crontab is to use PAM authentication in /etc/security/access.conf to set up users, which are allowed or disallowed to use crontab or modify system cron jobs in the /etc/cron.d/ directory.

The temporary directory can be set in an environment variable. If it is not set by the user, the /tmp directory is used.

When listing a crontab on a terminal the output will be colorized unless the environment variable NO_COLOR is set.

When the crontab is edited or deleted, a backup of the last crontab will be saved to $XDG_CACHE_HOME/crontab/crontab.bak, or $XDG_CACHE_HOME/crontab/crontab.<user>.bak if -u is used. If the XDG_CACHE_HOME environment variable is not set, $HOME/.cache will be used instead.
-u
    Specifies the name of the user whose crontab is to be modified. If this option is not used, crontab examines "your" crontab, i.e., the crontab of the person executing the command. If no crontab exists for a particular user, it is created for them the first time the crontab -u command is used under their username.

-T
    Test the crontab file syntax without installing it. Once an issue is found, the validation is interrupted, so this will not return all the existing issues at the same execution.

-l
    Displays the current crontab on standard output.

-r
    Removes the current crontab.

-e
    Edits the current crontab using the editor specified by the VISUAL or EDITOR environment variables. After you exit from the editor, the modified crontab will be installed automatically.

-i
    This option modifies the -r option to prompt the user for a 'y/Y' response before actually removing the crontab.

-s
    Appends the current SELinux security context string as an MLS_LEVEL setting to the crontab file before editing / replacement occurs - see the documentation of MLS_LEVEL in crontab(5).

-n
    This option is relevant only if cron(8) was started with the -c option, to enable clustering support. It is used to set the host in the cluster which should run the jobs specified in the crontab files in the /var/spool/cron directory. If a hostname is supplied, the host whose hostname returned by gethostname(2) matches the supplied hostname will be selected to run the selected cron jobs subsequently. If there is no host in the cluster matching the supplied hostname, or you explicitly specify an empty hostname, then the selected jobs will not be run at all. If the hostname is omitted, the name of the local host returned by gethostname(2) is used. Using this option has no effect on the /etc/crontab file and the files in the /etc/cron.d directory, which are always run, and considered host-specific. For more information on clustering support, see cron(8).
-c
    This option is only relevant if cron(8) was started with the -c option, to enable clustering support. It is used to query which host in the cluster is currently set to run the jobs specified in the crontab files in the directory /var/spool/cron, as set using the -n option.

-V
    Print version and exit.
# crontab

> Schedule cron jobs to run on a time interval for the current user. More information: https://crontab.guru/.

* Edit the crontab file for the current user: `crontab -e`
* Edit the crontab file for a specific user: `sudo crontab -e -u {{user}}`
* Replace the current crontab with the contents of the given file: `crontab {{path/to/file}}`
* View a list of existing cron jobs for current user: `crontab -l`
* Remove all cron jobs for the current user: `crontab -r`
* Sample job which runs at 10:00 every day (* means any value): `0 10 * * * {{command_to_execute}}`
* Sample crontab entry, which runs a command every 10 minutes: `*/10 * * * * {{command_to_execute}}`
* Sample crontab entry, which runs a certain script at 02:30 every Friday: `30 2 * * Fri {{/absolute/path/to/script.sh}}`
install
This install program copies files (often just compiled) into destination locations you choose. If you want to download and install a ready-to-use package on a GNU/Linux system, you should instead be using a package manager like yum(1) or apt-get(1).

In the first three forms, copy SOURCE to DEST or multiple SOURCE(s) to the existing DIRECTORY, while setting permission modes and owner/group. In the 4th form, create all components of the given DIRECTORY(ies).

Mandatory arguments to long options are mandatory for short options too.

--backup[=CONTROL]
    make a backup of each existing destination file

-b
    like --backup but does not accept an argument

-c
    (ignored)

-C, --compare
    compare content of source and destination files, and if no change to content, ownership, and permissions, do not modify the destination at all

-d, --directory
    treat all arguments as directory names; create all components of the specified directories

-D
    create all leading components of DEST except the last, or all components of --target-directory, then copy SOURCE to DEST

--debug
    explain how a file is copied. Implies -v

-g, --group=GROUP
    set group ownership, instead of process' current group

-m, --mode=MODE
    set permission mode (as in chmod), instead of rwxr-xr-x

-o, --owner=OWNER
    set ownership (super-user only)

-p, --preserve-timestamps
    apply access/modification times of SOURCE files to corresponding destination files

-s, --strip
    strip symbol tables

--strip-program=PROGRAM
    program used to strip binaries

-S, --suffix=SUFFIX
    override the usual backup suffix

-t, --target-directory=DIRECTORY
    copy all SOURCE arguments into DIRECTORY

-T, --no-target-directory
    treat DEST as a normal file

-v, --verbose
    print the name of each created file or directory

--preserve-context
    preserve SELinux security context

-Z
    set SELinux security context of destination file and each created directory to default type

--context[=CTX]
    like -Z, or if CTX is specified then set the SELinux or SMACK security context to CTX

--help
    display this help and exit

--version
    output version information and exit

The backup suffix is '~', unless set with --suffix or SIMPLE_BACKUP_SUFFIX. The version control method may be selected via the --backup option or through the VERSION_CONTROL environment variable. Here are the values:

none, off
    never make backups (even if --backup is given)

numbered, t
    make numbered backups

existing, nil
    numbered if numbered backups exist, simple otherwise

simple, never
    always make simple backups
# install

> Copy files and set attributes. Copy files (often executable) to a system location like `/usr/local/bin`, and give them the appropriate permissions/ownership. More information: https://www.gnu.org/software/coreutils/install.

* Copy files to the destination: `install {{path/to/source_file1 path/to/source_file2 ...}} {{path/to/destination}}`
* Copy files to the destination, setting their ownership: `install --owner {{user}} {{path/to/source_file1 path/to/source_file2 ...}} {{path/to/destination}}`
* Copy files to the destination, setting their group ownership: `install --group {{group}} {{path/to/source_file1 path/to/source_file2 ...}} {{path/to/destination}}`
* Copy files to the destination, setting their `mode`: `install --mode {{+x}} {{path/to/source_file1 path/to/source_file2 ...}} {{path/to/destination}}`
* Copy files and apply access/modification times of source to the destination: `install --preserve-timestamps {{path/to/source_file1 path/to/source_file2 ...}} {{path/to/destination}}`
* Copy files and create the directories at the destination if they don't exist: `install -D {{path/to/source_file1 path/to/source_file2 ...}} {{path/to/destination}}`
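A minimal sketch of -D and -m working together (the script name and destination path are invented for illustration): the leading directories are created and the mode is applied in a single step, which is why install is preferred over cp + mkdir + chmod in makefile install targets.

```shell
set -e
workdir=$(mktemp -d)
cd "$workdir"

# A tiny script to be "installed".
printf '#!/bin/sh\necho hello\n' > tool.sh

# -D creates bin/utils/ on the way; -m 755 makes the copy rwxr-xr-x.
install -D -m 755 tool.sh bin/utils/tool

ls -l bin/utils/tool
```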
colrm
colrm removes selected columns from a file. Input is taken from standard input. Output is sent to standard output.

If called with one parameter, the columns of each line will be removed starting with the specified first column. If called with two parameters, the columns from the first column to the last column will be removed. Column numbering starts with column 1.

-h, --help
    Display help text and exit.

-V, --version
    Print version and exit.
# colrm

> Remove columns from `stdin`. More information: https://manned.org/colrm.

* Remove first column of `stdin`: `colrm {{1 1}}`
* Remove from 3rd column till the end of each line: `colrm {{3}}`
* Remove from the 3rd column till the 5th column of each line: `colrm {{3 5}}`
resolvectl
resolvectl may be used to resolve domain names, IPv4 and IPv6 addresses, DNS resource records and services with the systemd-resolved.service(8) resolver service.

By default, the specified list of parameters will be resolved as hostnames, retrieving their IPv4 and IPv6 addresses. If the parameters specified are formatted as IPv4 or IPv6 addresses the reverse operation is done, and a hostname is retrieved for the specified addresses.

The program's output contains information about the protocol used for the look-up and on which network interface the data was discovered. It also contains information on whether the information could be authenticated. All data for which local DNSSEC validation succeeds is considered authenticated. Moreover all data originating from local, trusted sources is also reported authenticated, including resolution of the local host name, the "localhost" hostname or all data from /etc/hosts.

-4, -6
    By default, when resolving a hostname, both IPv4 and IPv6 addresses are acquired. By specifying -4 only IPv4 addresses are requested, by specifying -6 only IPv6 addresses are requested.

-i INTERFACE, --interface=INTERFACE
    Specifies the network interface to execute the query on. This may either be specified as numeric interface index or as network interface string (e.g. "en0"). Note that this option has no effect if system-wide DNS configuration (as configured in /etc/resolv.conf or /etc/systemd/resolved.conf) in place of per-link configuration is used.

-p PROTOCOL, --protocol=PROTOCOL
    Specifies the network protocol for the query. May be one of "dns" (i.e. classic unicast DNS), "llmnr" (Link-Local Multicast Name Resolution), "llmnr-ipv4", "llmnr-ipv6" (LLMNR via the indicated underlying IP protocols), "mdns" (Multicast DNS), "mdns-ipv4", "mdns-ipv6" (MDNS via the indicated underlying IP protocols). By default the lookup is done via all protocols suitable for the lookup. If used, limits the set of protocols that may be used.
Use this option multiple times to enable resolving via multiple protocols at the same time. The setting "llmnr" is identical to specifying this switch once with "llmnr-ipv4" and once via "llmnr-ipv6". Note that this option does not force the service to resolve the operation with the specified protocol, as that might require a suitable network interface and configuration. The special value "help" may be used to list known values. -t TYPE, --type=TYPE, -c CLASS, --class=CLASS When used in conjunction with the query command, specifies the DNS resource record type (e.g. A, AAAA, MX, ...) and class (e.g. IN, ANY, ...) to look up. If these options are used a DNS resource record set matching the specified class and type is requested. The class defaults to IN if only a type is specified. The special value "help" may be used to list known values. Without these options resolvectl query provides high-level domain name to address and address to domain name resolution. With these options it provides low-level DNS resource record resolution. The search domain logic is automatically turned off when these options are used, i.e. specified domain names need to be fully qualified domain names. Moreover, IDNA internal domain name translation is turned off as well, i.e. international domain names should be specified in "xn--..." notation, unless look-up in MulticastDNS/LLMNR is desired, in which case UTF-8 characters should be used. --service-address=BOOL Takes a boolean parameter. If true (the default), when doing a service lookup with --service the hostnames contained in the SRV resource records are resolved as well. --service-txt=BOOL Takes a boolean parameter. If true (the default), when doing a DNS-SD service lookup with --service the TXT service metadata record is resolved as well. --cname=BOOL Takes a boolean parameter. If true (the default), DNS CNAME or DNAME redirections are followed. Otherwise, if a CNAME or DNAME record is encountered while resolving, an error is returned. 
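Combining the options above, a low-level record lookup might look like the following sketch (example.com stands in for a real domain; the output depends on the resolver's state):

```shell
# Look up the MX record set for a domain. With --type= the search-domain
# logic is off, so the name must be fully qualified.
resolvectl --type=MX query example.com

# Restrict a hostname lookup to IPv4 over classic unicast DNS on a
# specific interface (the interface name is an assumption; adjust it).
resolvectl -4 --protocol=dns --interface=eth0 query example.com
```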
--validate=BOOL Takes a boolean parameter; used in conjunction with query. If true (the default), DNSSEC validation is applied as usual — under the condition that it is enabled for the network and for systemd-resolved.service as a whole. If false, DNSSEC validation is disabled for the specific query, regardless of whether it is enabled for the network or in the service. Note that setting this option to true does not force DNSSEC validation on systems/networks where DNSSEC is turned off. This option is only suitable to turn off such validation where otherwise enabled, not enable validation where otherwise disabled. --synthesize=BOOL Takes a boolean parameter; used in conjunction with query. If true (the default), select domains are resolved on the local system, among them "localhost", "_gateway", "_outbound", "_localdnsstub" and "_localdnsproxy" or entries from /etc/hosts. If false these domains are not resolved locally, and either fail (in case of "localhost", "_gateway" or "_outbound" and suchlike) or go to the network via regular DNS/mDNS/LLMNR lookups (in case of /etc/hosts entries). --cache=BOOL Takes a boolean parameter; used in conjunction with query. If true (the default), lookups use the local DNS resource record cache. If false, lookups are routed to the network instead, regardless if already available in the local cache. --zone=BOOL Takes a boolean parameter; used in conjunction with query. If true (the default), lookups are answered from locally registered LLMNR or mDNS resource records, if defined. If false, locally registered LLMNR/mDNS records are not considered for the lookup request. --trust-anchor=BOOL Takes a boolean parameter; used in conjunction with query. If true (the default), lookups for DS and DNSKEY are answered from the local DNSSEC trust anchors if possible. If false, the local trust store is not considered for the lookup request. --network=BOOL Takes a boolean parameter; used in conjunction with query. 
If true (the default), lookups are answered via DNS, LLMNR or mDNS network requests if they cannot be synthesized locally, or be answered from the local cache, zone or trust anchors (see above). If false, the request is not answered from the network and will thus fail if none of the indicated sources can answer them. --search=BOOL Takes a boolean parameter. If true (the default), any specified single-label hostnames will be searched in the domains configured in the search domain list, if it is non-empty. Otherwise, the search domain logic is disabled. Note that this option has no effect if --type= is used (see above), in which case the search domain logic is unconditionally turned off. --raw[=payload|packet] Dump the answer as binary data. If there is no argument or if the argument is "payload", the payload of the packet is exported. If the argument is "packet", the whole packet is dumped in wire format, prefixed by length specified as a little-endian 64-bit number. This format allows multiple packets to be dumped and unambiguously parsed. --legend=BOOL Takes a boolean parameter. If true (the default), column headers and meta information about the query response are shown. Otherwise, this output is suppressed. --stale-data=BOOL Takes a boolean parameter; used in conjunction with query. If true (the default), lookups are answered with stale data (expired resource records) if possible. If false, the stale data is not considered for the lookup request. --json=MODE Shows output formatted as JSON. Expects one of "short" (for the shortest possible output without any redundant whitespace or line breaks), "pretty" (for a pretty version of the same, with indentation and line breaks) or "off" (to turn off JSON output, the default). -j Short for --json=auto --no-pager Do not pipe output into a pager. -h, --help Print a short help text and exit. --version Print a short version string and exit.
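The output-control switches compose with query; a sketch under the assumption of a reasonably recent systemd (the domain and output filename are placeholders):

```shell
# Machine-readable answer: suppress headers, emit pretty-printed JSON.
resolvectl --legend=no --json=pretty query example.com

# Dump the raw DNS payload of an MX lookup to a file for offline inspection.
resolvectl --type=MX --raw=payload query example.com > mx-answer.bin
```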
# resolvectl > Resolve domain names, IPv4 and IPv6 addresses, DNS resource records, and > services. Introspect and reconfigure the DNS resolver. More information: > https://www.freedesktop.org/software/systemd/man/resolvectl.html. * Show DNS settings: `resolvectl status` * Resolve the IPv4 and IPv6 addresses for one or more domains: `resolvectl query {{domain1 domain2 ...}}` * Retrieve the domain of a specified IP address: `resolvectl query {{ip_address}}` * Retrieve an MX record of a domain: `resolvectl --legend={{no}} --type={{MX}} query {{domain}}` * Resolve an SRV record, for example _xmpp-server._tcp gmail.com: `resolvectl service _{{service}}._{{protocol}} {{name}}` * Retrieve the public key from an email address from an OPENPGPKEY DNS record: `resolvectl openpgp {{email}}` * Retrieve a TLS key: `resolvectl tlsa tcp {{domain}}:443`
ssh-keygen
ssh-keygen generates, manages and converts authentication keys for ssh(1). ssh-keygen can create keys for use by SSH protocol version 2. The type of key to be generated is specified with the -t option. If invoked without any arguments, ssh-keygen will generate an RSA key. ssh-keygen is also used to generate groups for use in Diffie- Hellman group exchange (DH-GEX). See the MODULI GENERATION section for details. Finally, ssh-keygen can be used to generate and update Key Revocation Lists, and to test whether given keys have been revoked by one. See the KEY REVOCATION LISTS section for details. Normally each user wishing to use SSH with public key authentication runs this once to create the authentication key in ~/.ssh/id_dsa, ~/.ssh/id_ecdsa, ~/.ssh/id_ecdsa_sk, ~/.ssh/id_ed25519, ~/.ssh/id_ed25519_sk or ~/.ssh/id_rsa. Additionally, the system administrator may use this to generate host keys, as seen in /etc/rc. Normally this program generates the key and asks for a file in which to store the private key. The public key is stored in a file with the same name but “.pub” appended. The program also asks for a passphrase. The passphrase may be empty to indicate no passphrase (host keys must have an empty passphrase), or it may be a string of arbitrary length. A passphrase is similar to a password, except it can be a phrase with a series of words, punctuation, numbers, whitespace, or any string of characters you want. Good passphrases are 10-30 characters long, are not simple sentences or otherwise easily guessable (English prose has only 1-2 bits of entropy per character, and provides very bad passphrases), and contain a mix of upper and lowercase letters, numbers, and non- alphanumeric characters. The passphrase can be changed later by using the -p option. There is no way to recover a lost passphrase. If the passphrase is lost or forgotten, a new key must be generated and the corresponding public key copied to other machines. 
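A non-interactive version of that flow, useful in scripts, might look like this; the file path, comment, and the empty passphrase are illustrative choices, not defaults to copy blindly:

```shell
# Generate an Ed25519 key pair with no passphrase prompt (-N "") into an
# explicit file; the public half lands next to it with ".pub" appended.
ssh-keygen -t ed25519 -N "" -C "user@host-example" -f ./demo_key -q

# Later, a real passphrase can replace the empty one via -p:
# ssh-keygen -p -f ./demo_key
ls demo_key demo_key.pub
```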
ssh-keygen will by default write keys in an OpenSSH-specific format. This format is preferred as it offers better protection for keys at rest as well as allowing storage of key comments within the private key file itself. The key comment may be useful to help identify the key. The comment is initialized to “user@host” when the key is created, but can be changed using the -c option. It is still possible for ssh-keygen to write the previously-used PEM format private keys using the -m flag. This may be used when generating new keys, and existing new-format keys may be converted using this option in conjunction with the -p (change passphrase) flag. After a key is generated, ssh-keygen will ask where the keys should be placed to be activated. The options are as follows: -A Generate host keys of all default key types (rsa, ecdsa, and ed25519) if they do not already exist. The host keys are generated with the default key file path, an empty passphrase, default bits for the key type, and default comment. If -f has also been specified, its argument is used as a prefix to the default path for the resulting host key files. This is used by /etc/rc to generate new host keys. -a rounds When saving a private key, this option specifies the number of KDF (key derivation function, currently bcrypt_pbkdf(3)) rounds used. Higher numbers result in slower passphrase verification and increased resistance to brute-force password cracking (should the keys be stolen). The default is 16 rounds. -B Show the bubblebabble digest of specified private or public key file. -b bits Specifies the number of bits in the key to create. For RSA keys, the minimum size is 1024 bits and the default is 3072 bits. Generally, 3072 bits is considered sufficient. DSA keys must be exactly 1024 bits as specified by FIPS 186-2. For ECDSA keys, the -b flag determines the key length by selecting from one of three elliptic curve sizes: 256, 384 or 521 bits. 
Attempting to use bit lengths other than these three values for ECDSA keys will fail. ECDSA-SK, Ed25519 and Ed25519-SK keys have a fixed length and the -b flag will be ignored. -C comment Provides a new comment. -c Requests changing the comment in the private and public key files. The program will prompt for the file containing the private keys, for the passphrase if the key has one, and for the new comment. -D pkcs11 Download the public keys provided by the PKCS#11 shared library pkcs11. When used in combination with -s, this option indicates that a CA key resides in a PKCS#11 token (see the CERTIFICATES section for details). -E fingerprint_hash Specifies the hash algorithm used when displaying key fingerprints. Valid options are: “md5” and “sha256”. The default is “sha256”. -e This option will read a private or public OpenSSH key file and print to stdout a public key in one of the formats specified by the -m option. The default export format is “RFC4716”. This option allows exporting OpenSSH keys for use by other programs, including several commercial SSH implementations. -F hostname | [hostname]:port Search for the specified hostname (with optional port number) in a known_hosts file, listing any occurrences found. This option is useful to find hashed host names or addresses and may also be used in conjunction with the -H option to print found keys in a hashed format. -f filename Specifies the filename of the key file. -g Use generic DNS format when printing fingerprint resource records using the -r command. -H Hash a known_hosts file. This replaces all hostnames and addresses with hashed representations within the specified file; the original content is moved to a file with a .old suffix. These hashes may be used normally by ssh and sshd, but they do not reveal identifying information should the file's contents be disclosed. This option will not modify existing hashed hostnames and is therefore safe to use on files that mix hashed and non-hashed names. 
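The -H and -F options pair naturally when auditing a known_hosts file; a sketch that works on a copy so the real file is untouched (the hostname is a placeholder):

```shell
# Work on a copy; -H rewrites the file and keeps the original as *.old.
cp ~/.ssh/known_hosts /tmp/kh.copy
ssh-keygen -H -f /tmp/kh.copy

# Even after hashing, -F can still find a specific host (optionally with
# a non-default port in bracket notation).
ssh-keygen -F server.example.com -f /tmp/kh.copy
ssh-keygen -F "[server.example.com]:2222" -f /tmp/kh.copy
```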
-h When signing a key, create a host certificate instead of a user certificate. See the CERTIFICATES section for details. -I certificate_identity Specify the key identity when signing a public key. See the CERTIFICATES section for details. -i This option will read an unencrypted private (or public) key file in the format specified by the -m option and print an OpenSSH compatible private (or public) key to stdout. This option allows importing keys from other software, including several commercial SSH implementations. The default import format is “RFC4716”. -K Download resident keys from a FIDO authenticator. Public and private key files will be written to the current directory for each downloaded key. If multiple FIDO authenticators are attached, keys will be downloaded from the first touched authenticator. See the FIDO AUTHENTICATOR section for more information. -k Generate a KRL file. In this mode, ssh-keygen will generate a KRL file at the location specified via the -f flag that revokes every key or certificate presented on the command line. Keys/certificates to be revoked may be specified by public key file or using the format described in the KEY REVOCATION LISTS section. -L Prints the contents of one or more certificates. -l Show fingerprint of specified public key file. For RSA and DSA keys ssh-keygen tries to find the matching public key file and prints its fingerprint. If combined with -v, a visual ASCII art representation of the key is supplied with the fingerprint. -M generate Generate candidate Diffie-Hellman Group Exchange (DH-GEX) parameters for eventual use by the ‘diffie-hellman-group-exchange-*’ key exchange methods. The numbers generated by this operation must be further screened before use. See the MODULI GENERATION section for more information. -M screen Screen candidate parameters for Diffie-Hellman Group Exchange. This will accept a list of candidate numbers and test that they are safe (Sophie Germain) primes with acceptable group generators. 
The results of this operation may be added to the /etc/moduli file. See the MODULI GENERATION section for more information. -m key_format Specify a key format for key generation, the -i (import), -e (export) conversion options, and the -p change passphrase operation. The latter may be used to convert between OpenSSH private key and PEM private key formats. The supported key formats are: “RFC4716” (RFC 4716/SSH2 public or private key), “PKCS8” (PKCS8 public or private key) or “PEM” (PEM public key). By default OpenSSH will write newly-generated private keys in its own format, but when converting public keys for export the default format is “RFC4716”. Setting a format of “PEM” when generating or updating a supported private key type will cause the key to be stored in the legacy PEM private key format. -N new_passphrase Provides the new passphrase. -n principals Specify one or more principals (user or host names) to be included in a certificate when signing a key. Multiple principals may be specified, separated by commas. See the CERTIFICATES section for details. -O option Specify a key/value option. These are specific to the operation that ssh-keygen has been requested to perform. When signing certificates, one of the options listed in the CERTIFICATES section may be specified here. When performing moduli generation or screening, one of the options listed in the MODULI GENERATION section may be specified. When generating FIDO authenticator-backed keys, the options listed in the FIDO AUTHENTICATOR section may be specified. When performing signature-related options using the -Y flag, the following options are accepted: hashalg=algorithm Selects the hash algorithm to use for hashing the message to be signed. Valid algorithms are “sha256” and “sha512.” The default is “sha512.” print-pubkey Print the full public key to standard output after signature verification. verify-time=timestamp Specifies a time to use when validating signatures instead of the current time. 
The time may be specified as a date or time in the YYYYMMDD[Z] or in YYYYMMDDHHMM[SS][Z] formats. Dates and times will be interpreted in the current system time zone unless suffixed with a Z character, which causes them to be interpreted in the UTC time zone. When generating SSHFP DNS records from public keys using the -r flag, the following options are accepted: hashalg=algorithm Selects a hash algorithm to use when printing SSHFP records using the -D flag. Valid algorithms are “sha1” and “sha256”. The default is to print both. The -O option may be specified multiple times. -P passphrase Provides the (old) passphrase. -p Requests changing the passphrase of a private key file instead of creating a new private key. The program will prompt for the file containing the private key, for the old passphrase, and twice for the new passphrase. -Q Test whether keys have been revoked in a KRL. If the -l option is also specified then the contents of the KRL will be printed. -q Silence ssh-keygen. -R hostname | [hostname]:port Removes all keys belonging to the specified hostname (with optional port number) from a known_hosts file. This option is useful to delete hashed hosts (see the -H option above). -r hostname Print the SSHFP fingerprint resource record named hostname for the specified public key file. -s ca_key Certify (sign) a public key using the specified CA key. See the CERTIFICATES section for details. When generating a KRL, -s specifies a path to a CA public key file used to revoke certificates directly by key ID or serial number. See the KEY REVOCATION LISTS section for details. -t dsa | ecdsa | ecdsa-sk | ed25519 | ed25519-sk | rsa Specifies the type of key to create. The possible values are “dsa”, “ecdsa”, “ecdsa-sk”, “ed25519”, “ed25519-sk”, or “rsa”. This flag may also be used to specify the desired signature type when signing certificates using an RSA CA key. 
The available RSA signature variants are “ssh-rsa” (SHA1 signatures, not recommended), “rsa-sha2-256”, and “rsa-sha2-512” (the default). -U When used in combination with -s or -Y sign, this option indicates that a CA key resides in a ssh-agent(1). See the CERTIFICATES section for more information. -u Update a KRL. When specified with -k, keys listed via the command line are added to the existing KRL rather than a new KRL being created. -V validity_interval Specify a validity interval when signing a certificate. A validity interval may consist of a single time, indicating that the certificate is valid beginning now and expiring at that time, or may consist of two times separated by a colon to indicate an explicit time interval. The start time may be specified as: • The string “always” to indicate the certificate has no specified start time. • A date or time in the system time zone formatted as YYYYMMDD or YYYYMMDDHHMM[SS]. • A date or time in the UTC time zone as YYYYMMDDZ or YYYYMMDDHHMM[SS]Z. • A relative time before the current system time consisting of a minus sign followed by an interval in the format described in the TIME FORMATS section of sshd_config(5). • A raw seconds since epoch (Jan 1 1970 00:00:00 UTC) as a hexadecimal number beginning with “0x”. The end time may be specified similarly to the start time: • The string “forever” to indicate the certificate has no specified end time. • A date or time in the system time zone formatted as YYYYMMDD or YYYYMMDDHHMM[SS]. • A date or time in the UTC time zone as YYYYMMDDZ or YYYYMMDDHHMM[SS]Z. • A relative time after the current system time consisting of a plus sign followed by an interval in the format described in the TIME FORMATS section of sshd_config(5). • A raw seconds since epoch (Jan 1 1970 00:00:00 UTC) as a hexadecimal number beginning with “0x”. For example: +52w1d Valid from now to 52 weeks and one day from now. -4w:+4w Valid from four weeks ago to four weeks from now. 
20100101123000:20110101123000 Valid from 12:30 PM, January 1st, 2010 to 12:30 PM, January 1st, 2011. 20100101123000Z:20110101123000Z Similar, but interpreted in the UTC time zone rather than the system time zone. -1d:20110101 Valid from yesterday to midnight, January 1st, 2011. 0x1:0x2000000000 Valid from roughly early 1970 to May 2033. -1m:forever Valid from one minute ago and never expiring. -v Verbose mode. Causes ssh-keygen to print debugging messages about its progress. This is helpful for debugging moduli generation. Multiple -v options increase the verbosity. The maximum is 3. -w provider Specifies a path to a library that will be used when creating FIDO authenticator-hosted keys, overriding the default of using the internal USB HID support. -Y find-principals Find the principal(s) associated with the public key of a signature, provided using the -s flag in an authorized signers file provided using the -f flag. The format of the allowed signers file is documented in the ALLOWED SIGNERS section below. If one or more matching principals are found, they are returned on standard output. -Y match-principals Find principal matching the principal name provided using the -I flag in the authorized signers file specified using the -f flag. If one or more matching principals are found, they are returned on standard output. -Y check-novalidate Checks that a signature generated using ssh-keygen -Y sign has a valid structure. This does not validate if a signature comes from an authorized signer. When testing a signature, ssh-keygen accepts a message on standard input and a signature namespace using -n. A file containing the corresponding signature must also be supplied using the -s flag. Successful testing of the signature is signalled by ssh-keygen returning a zero exit status. -Y sign Cryptographically sign a file or some data using a SSH key. 
When signing, ssh-keygen accepts zero or more files to sign on the command-line - if no files are specified then ssh-keygen will sign data presented on standard input. Signatures are written to the path of the input file with “.sig” appended, or to standard output if the message to be signed was read from standard input. The key used for signing is specified using the -f option and may refer to either a private key, or a public key with the private half available via ssh-agent(1). An additional signature namespace, used to prevent signature confusion across different domains of use (e.g. file signing vs email signing) must be provided via the -n flag. Namespaces are arbitrary strings, and may include: “file” for file signing, “email” for email signing. For custom uses, it is recommended to use names following a NAMESPACE@YOUR.DOMAIN pattern to generate unambiguous namespaces. -Y verify Request to verify a signature generated using ssh-keygen -Y sign as described above. When verifying a signature, ssh-keygen accepts a message on standard input and a signature namespace using -n. A file containing the corresponding signature must also be supplied using the -s flag, along with the identity of the signer using -I and a list of allowed signers via the -f flag. The format of the allowed signers file is documented in the ALLOWED SIGNERS section below. A file containing revoked keys can be passed using the -r flag. The revocation file may be a KRL or a one-per-line list of public keys. Successful verification by an authorized signer is signalled by ssh-keygen returning a zero exit status. -y This option will read a private OpenSSH format file and print an OpenSSH public key to stdout. -Z cipher Specifies the cipher to use for encryption when writing an OpenSSH-format private key file. The list of available ciphers may be obtained using "ssh -Q cipher". The default is “aes256-ctr”. 
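End to end, signing and verifying with -Y looks roughly like the following; the key paths, the "file" namespace, and the signer identity are illustrative, and -Y requires OpenSSH 8.0 or later:

```shell
d=$(mktemp -d)
ssh-keygen -t ed25519 -N "" -C "signer@example" -f "$d/sigkey" -q

# Sign a file in the "file" namespace; the signature is written to
# message.txt.sig next to the input.
printf 'release artifact\n' > "$d/message.txt"
ssh-keygen -Y sign -f "$d/sigkey" -n file "$d/message.txt"

# Build a one-line allowed_signers file: "principal key-type key".
printf 'signer@example %s\n' "$(cut -d' ' -f1,2 "$d/sigkey.pub")" > "$d/allowed"

# Verify: message on stdin, signature via -s, signer identity via -I.
ssh-keygen -Y verify -f "$d/allowed" -I signer@example -n file \
    -s "$d/message.txt.sig" < "$d/message.txt"
```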
-z serial_number Specifies a serial number to be embedded in the certificate to distinguish this certificate from others from the same CA. If the serial_number is prefixed with a ‘+’ character, then the serial number will be incremented for each certificate signed on a single command-line. The default serial number is zero. When generating a KRL, the -z flag is used to specify a KRL version number.
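Putting the certificate options together (-s, -I, -n, -V, -z), a user certificate can be minted from a CA key; everything below is a throwaway illustration, not a production setup:

```shell
d=$(mktemp -d)
# A throwaway CA key and a user key to be certified.
ssh-keygen -t ed25519 -N "" -f "$d/ca" -q
ssh-keygen -t ed25519 -N "" -f "$d/user" -q

# Sign user.pub: identity "alice", principal "alice", valid for 52 weeks,
# serial number 1. This produces user-cert.pub alongside user.pub.
ssh-keygen -s "$d/ca" -I alice -n alice -V +52w -z 1 "$d/user.pub"

# Inspect the resulting certificate.
ssh-keygen -L -f "$d/user-cert.pub"
```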
# ssh-keygen > Generate SSH keys used for authentication, password-less logins, and other things. More information: https://man.openbsd.org/ssh-keygen. * Generate a key interactively: `ssh-keygen` * Generate an ed25519 key with 32 key derivation function rounds and save the key to a specific file: `ssh-keygen -t {{ed25519}} -a {{32}} -f {{~/.ssh/filename}}` * Generate an RSA 4096-bit key with email as a comment: `ssh-keygen -t {{rsa}} -b {{4096}} -C "{{comment|email}}"` * Remove the keys of a host from the known_hosts file (useful when a known host has a new key): `ssh-keygen -R {{remote_host}}` * Retrieve the fingerprint of a key in MD5 Hex: `ssh-keygen -l -E {{md5}} -f {{~/.ssh/filename}}` * Change the passphrase of a key: `ssh-keygen -p -f {{~/.ssh/filename}}` * Change the key format (for example from OPENSSH to PEM); the file will be rewritten in place: `ssh-keygen -p -N "" -m {{PEM}} -f {{~/.ssh/OpenSSH_private_key}}` * Retrieve the public key from a private key: `ssh-keygen -y -f {{~/.ssh/OpenSSH_private_key}}`
pidstat
The pidstat command is used for monitoring individual tasks currently being managed by the Linux kernel. It writes to standard output activities for every task selected with option -p or for every task managed by the Linux kernel if option -p ALL has been used. Not selecting any tasks is equivalent to specifying -p ALL but only active tasks (tasks with non-zero statistics values) will appear in the report. The pidstat command can also be used for monitoring the child processes of selected tasks. Read about option -T below. The interval parameter specifies the amount of time in seconds between each report. A value of 0 (or no parameters at all) indicates that task statistics are to be reported for the time since system startup (boot). The count parameter can be specified in conjunction with the interval parameter if the interval is not set to zero. The value of count determines the number of reports generated at interval seconds apart. If the interval parameter is specified without the count parameter, the pidstat command generates reports continuously. You can select information about specific task activities using flags. Not specifying any flags selects only CPU activity. -C comm Display only tasks whose command name includes the string comm. This string can be a regular expression. -d Report I/O statistics (kernels 2.6.20 and later only). The following values may be displayed: UID The real user identification number of the task being monitored. USER The name of the real user owning the task being monitored. PID The identification number of the task being monitored. kB_rd/s Number of kilobytes the task has caused to be read from disk per second. kB_wr/s Number of kilobytes the task has caused, or shall cause, to be written to disk per second. kB_ccwr/s Number of kilobytes whose writing to disk has been cancelled by the task. This may occur when the task truncates some dirty pagecache. 
In this case, some IO which another task has been accounted for will not be happening. iodelay Block I/O delay of the task being monitored, measured in clock ticks. This metric includes the delays spent waiting for sync block I/O completion and for swapin block I/O completion. Command The command name of the task. --dec={ 0 | 1 | 2 } Specify the number of decimal places to use (0 to 2, default value is 2). -e program args Execute program with given arguments args and monitor it with pidstat. pidstat stops when program terminates. -G process_name Display only processes whose command name includes the string process_name. This string can be a regular expression. If option -t is used together with option -G then the threads belonging to that process are also displayed (even if their command name doesn't include the string process_name). -H Display timestamp in seconds since the epoch. -h Display all activities horizontally on a single line, with no average statistics at the end of the report. This is intended to make it easier to be parsed by other programs. --human Print sizes in human readable format (e.g. 1.0k, 1.2M, etc.) The units displayed with this option supersede any other default units (e.g. kilobytes, sectors...) associated with the metrics. -I In an SMP environment, indicate that tasks CPU usage (as displayed by option -u) should be divided by the total number of processors. -l Display the process command name and all its arguments. -p { pid[,...] | SELF | ALL } Select tasks (processes) for which statistics are to be reported. pid is the process identification number. The SELF keyword indicates that statistics are to be reported for the pidstat process itself, whereas the ALL keyword indicates that statistics are to be reported for all the tasks managed by the system. -R Report realtime priority and scheduling policy information. The following values may be displayed: UID The real user identification number of the task being monitored. 
USER The name of the real user owning the task being monitored. PID The identification number of the task being monitored. prio The realtime priority of the task being monitored. policy The scheduling policy of the task being monitored. Command The command name of the task. -r Report page faults and memory utilization. When reporting statistics for individual tasks, the following values may be displayed: UID The real user identification number of the task being monitored. USER The name of the real user owning the task being monitored. PID The identification number of the task being monitored. minflt/s Total number of minor faults the task has made per second, those which have not required loading a memory page from disk. majflt/s Total number of major faults the task has made per second, those which have required loading a memory page from disk. VSZ Virtual Size: The virtual memory usage of the entire task in kilobytes. RSS Resident Set Size: The non-swapped physical memory used by the task in kilobytes. %MEM The task's currently used share of available physical memory. Command The command name of the task. When reporting global statistics for tasks and all their children, the following values may be displayed: UID The real user identification number of the task which is being monitored together with its children. USER The name of the real user owning the task which is being monitored together with its children. PID The identification number of the task which is being monitored together with its children. minflt-nr Total number of minor faults made by the task and all its children, and collected during the interval of time. majflt-nr Total number of major faults made by the task and all its children, and collected during the interval of time. Command The command name of the task which is being monitored together with its children. -s Report stack utilization. The following values may be displayed: UID The real user identification number of the task being monitored. 
USER The name of the real user owning the task being monitored. PID The identification number of the task being monitored. StkSize The amount of memory in kilobytes reserved for the task as stack, but not necessarily used. StkRef The amount of memory in kilobytes used as stack, referenced by the task. Command The command name of the task. -T { TASK | CHILD | ALL } This option specifies what has to be monitored by the pidstat command. The TASK keyword indicates that statistics are to be reported for individual tasks (this is the default option) whereas the CHILD keyword indicates that statistics are to be globally reported for the selected tasks and all their children. The ALL keyword indicates that statistics are to be reported for individual tasks and globally for the selected tasks and their children. Note: Global statistics for tasks and all their children are not available for all options of pidstat. Also these statistics are not necessarily relevant to current time interval: The statistics of a child process are collected only when it finishes or it is killed. -t Also display statistics for threads associated with selected tasks. This option adds the following values to the reports: TGID The identification number of the thread group leader. TID The identification number of the thread being monitored. -U [ username ] Display the real user name of the tasks being monitored instead of the UID. If username is specified, then only tasks belonging to the specified user are displayed. -u Report CPU utilization. When reporting statistics for individual tasks, the following values may be displayed: UID The real user identification number of the task being monitored. USER The name of the real user owning the task being monitored. PID The identification number of the task being monitored. %usr Percentage of CPU used by the task while executing at the user level (application), with or without nice priority. 
Note that this field does NOT include time spent running a virtual processor. %system Percentage of CPU used by the task while executing at the system level (kernel). %guest Percentage of CPU spent by the task in virtual machine (running a virtual processor). %wait Percentage of CPU spent by the task while waiting to run. %CPU Total percentage of CPU time used by the task. In an SMP environment, the task's CPU usage will be divided by the total number of CPU's if option -I has been entered on the command line. CPU Processor number to which the task is attached. Command The command name of the task. When reporting global statistics for tasks and all their children, the following values may be displayed: UID The real user identification number of the task which is being monitored together with its children. USER The name of the real user owning the task which is being monitored together with its children. PID The identification number of the task which is being monitored together with its children. usr-ms Total number of milliseconds spent by the task and all its children while executing at the user level (application), with or without nice priority, and collected during the interval of time. Note that this field does NOT include time spent running a virtual processor. system-ms Total number of milliseconds spent by the task and all its children while executing at the system level (kernel), and collected during the interval of time. guest-ms Total number of milliseconds spent by the task and all its children in virtual machine (running a virtual processor). Command The command name of the task which is being monitored together with its children. -V Print version number then exit. -v Report values of some kernel tables. The following values may be displayed: UID The real user identification number of the task being monitored. USER The name of the real user owning the task being monitored. PID The identification number of the task being monitored. 
threads Number of threads associated with current task. fd-nr Number of file descriptors associated with current task. Command The command name of the task. -w Report task switching activity (kernels 2.6.23 and later only). The following values may be displayed: UID The real user identification number of the task being monitored. USER The name of the real user owning the task being monitored. PID The identification number of the task being monitored. cswch/s Total number of voluntary context switches the task made per second. A voluntary context switch occurs when a task blocks because it requires a resource that is unavailable. nvcswch/s Total number of non voluntary context switches the task made per second. An involuntary context switch takes place when a task executes for the duration of its time slice and then is forced to relinquish the processor. Command The command name of the task.
# pidstat

> Show system resource usage, including CPU, memory, IO etc.
> More information: https://manned.org/pidstat.

* Show CPU statistics every 2 seconds, 10 times:

`pidstat {{2}} {{10}}`

* Show page faults and memory utilization:

`pidstat -r`

* Show input/output usage per process ID:

`pidstat -d`

* Show information on a specific PID:

`pidstat -p {{PID}}`

* Show memory statistics for all processes whose command name includes "fox" or "bird":

`pidstat -C "{{fox|bird}}" -r -p ALL`
git-stash
Use git stash when you want to record the current state of the working directory and the index, but want to go back to a clean working directory. The command saves your local modifications away and reverts the working directory to match the HEAD commit.

The modifications stashed away by this command can be listed with git stash list, inspected with git stash show, and restored (potentially on top of a different commit) with git stash apply. Calling git stash without any arguments is equivalent to git stash push. A stash is by default listed as "WIP on branchname ...", but you can give a more descriptive message on the command line when you create one.

The latest stash you created is stored in refs/stash; older stashes are found in the reflog of this reference and can be named using the usual reflog syntax (e.g. stash@{0} is the most recently created stash, stash@{1} is the one before it, stash@{2.hours.ago} is also possible). Stashes may also be referenced by specifying just the stash index (e.g. the integer n is equivalent to stash@{n}).

-a, --all This option is only valid for push and save commands. All ignored and untracked files are also stashed and then cleaned up with git clean.

-u, --include-untracked, --no-include-untracked When used with the push and save commands, all untracked files are also stashed and then cleaned up with git clean. When used with the show command, show the untracked files in the stash entry as part of the diff.

--only-untracked This option is only valid for the show command. Show only the untracked files in the stash entry as part of the diff.

--index This option is only valid for pop and apply commands. Tries to reinstate not only the working tree's changes, but also the index's ones. However, this can fail when you have conflicts (which are stored in the index, where you therefore can no longer apply the changes as they were originally).

-k, --keep-index, --no-keep-index This option is only valid for push and save commands. All changes already added to the index are left intact.

-p, --patch This option is only valid for push and save commands. Interactively select hunks from the diff between HEAD and the working tree to be stashed. The stash entry is constructed such that its index state is the same as the index state of your repository, and its worktree contains only the changes you selected interactively. The selected changes are then rolled back from your worktree. See the "Interactive Mode" section of git-add(1) to learn how to operate the --patch mode. The --patch option implies --keep-index. You can use --no-keep-index to override this.

-S, --staged This option is only valid for push and save commands. Stash only the changes that are currently staged. This is similar to basic git commit except the state is committed to the stash instead of the current branch. The --patch option has priority over this one.

--pathspec-from-file=<file> This option is only valid for the push command. Pathspec is passed in <file> instead of commandline args. If <file> is exactly - then standard input is used. Pathspec elements are separated by LF or CR/LF. Pathspec elements can be quoted as explained for the configuration variable core.quotePath (see git-config(1)). See also --pathspec-file-nul and global --literal-pathspecs.

--pathspec-file-nul This option is only valid for the push command. Only meaningful with --pathspec-from-file. Pathspec elements are separated with NUL character and all other characters are taken literally (including newlines and quotes).

-q, --quiet This option is only valid for apply, drop, pop, push, save, store commands. Quiet, suppress feedback messages.

-- This option is only valid for the push command. Separates pathspec from options for disambiguation purposes.

<pathspec>... This option is only valid for the push command. The new stash entry records the modified states only for the files that match the pathspec. The index entries and working tree files are then rolled back to the state in HEAD only for these files, too, leaving files that do not match the pathspec intact. For more details, see the pathspec entry in gitglossary(7).

<stash> This option is only valid for apply, branch, drop, pop, show commands. A reference of the form stash@{<revision>}. When no <stash> is given, the latest stash is assumed (that is, stash@{0}).
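The effect of --keep-index can be sketched in a throwaway repository; the file names and stash message below are illustrative. The staged change survives in the index and worktree, while the unstaged change is stashed away:

```shell
# A minimal sketch of `git stash push --keep-index` in a throwaway repo.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
printf 'one\n' > a.txt
printf 'one\n' > b.txt
git add . && git commit -qm initial
printf 'two\n' > a.txt && git add a.txt   # a staged change
printf 'two\n' > b.txt                    # an unstaged change
git stash push --keep-index -m 'keep-index demo'
cat a.txt   # staged change is left intact in index and worktree
cat b.txt   # unstaged change was stashed; file is back at HEAD
```

Note that the stash entry itself still records both changes; --keep-index only controls what is left behind in the worktree.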
# git stash

> Stash local Git changes in a temporary area.
> More information: https://git-scm.com/docs/git-stash.

* Stash current changes, except new (untracked) files:

`git stash push -m {{optional_stash_message}}`

* Stash current changes, including new (untracked) files:

`git stash -u`

* Interactively select parts of changed files for stashing:

`git stash -p`

* List all stashes (shows stash name, related branch and message):

`git stash list`

* Show the changes as a patch between the stash (default is stash@{0}) and the commit the stash entry was first created from:

`git stash show -p {{stash@{0}}}`

* Apply a stash (default is the latest, named stash@{0}):

`git stash apply {{optional_stash_name_or_commit}}`

* Apply a stash (default is stash@{0}), and remove it from the stash list if applying doesn't cause conflicts:

`git stash pop {{optional_stash_name}}`

* Drop all stashes:

`git stash clear`
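The push/list/pop cycle above can be sketched end to end in a throwaway repository (file name and message are illustrative):

```shell
# A minimal stash round trip in a throwaway repository.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
printf 'v1\n' > file.txt
git add . && git commit -qm initial
printf 'v2\n' > file.txt              # a local modification
git stash push -m 'wip: tweak file'   # worktree reverts to HEAD
cat file.txt                          # prints: v1
git stash list                        # shows stash@{0} with the message
git stash pop                         # reapplies the change, drops the entry
cat file.txt                          # prints: v2
```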
git-bisect
The command takes various subcommands, and different options depending on the subcommand:

git bisect start [--term-{new,bad}=<term> --term-{old,good}=<term>] [--no-checkout] [--first-parent] [<bad> [<good>...]] [--] [<paths>...]
git bisect (bad|new|<term-new>) [<rev>]
git bisect (good|old|<term-old>) [<rev>...]
git bisect terms [--term-good | --term-bad]
git bisect skip [(<rev>|<range>)...]
git bisect reset [<commit>]
git bisect (visualize|view)
git bisect replay <logfile>
git bisect log
git bisect run <cmd>...
git bisect help

This command uses a binary search algorithm to find which commit in your project's history introduced a bug. You use it by first telling it a "bad" commit that is known to contain the bug, and a "good" commit that is known to be before the bug was introduced. Then git bisect picks a commit between those two endpoints and asks you whether the selected commit is "good" or "bad". It continues narrowing down the range until it finds the exact commit that introduced the change.

In fact, git bisect can be used to find the commit that changed any property of your project; e.g., the commit that fixed a bug, or the commit that caused a benchmark's performance to improve. To support this more general usage, the terms "old" and "new" can be used in place of "good" and "bad", or you can choose your own terms. See section "Alternate terms" below for more information.

Basic bisect commands: start, bad, good

As an example, suppose you are trying to find the commit that broke a feature that was known to work in version v2.6.13-rc2 of your project.
You start a bisect session as follows:

$ git bisect start
$ git bisect bad                 # Current version is bad
$ git bisect good v2.6.13-rc2    # v2.6.13-rc2 is known to be good

Once you have specified at least one bad and one good commit, git bisect selects a commit in the middle of that range of history, checks it out, and outputs something similar to the following:

Bisecting: 675 revisions left to test after this (roughly 10 steps)

You should now compile the checked-out version and test it. If that version works correctly, type

$ git bisect good

If that version is broken, type

$ git bisect bad

Then git bisect will respond with something like

Bisecting: 337 revisions left to test after this (roughly 9 steps)

Keep repeating the process: compile the tree, test it, and depending on whether it is good or bad run git bisect good or git bisect bad to ask for the next commit that needs testing. Eventually there will be no more revisions left to inspect, and the command will print out a description of the first bad commit. The reference refs/bisect/bad will be left pointing at that commit.

Bisect reset

After a bisect session, to clean up the bisection state and return to the original HEAD, issue the following command:

$ git bisect reset

By default, this will return your tree to the commit that was checked out before git bisect start. (A new git bisect start will also do that, as it cleans up the old bisection state.) With an optional argument, you can return to a different commit instead:

$ git bisect reset <commit>

For example, git bisect reset bisect/bad will check out the first bad revision, while git bisect reset HEAD will leave you on the current bisection commit and avoid switching commits at all.

Alternate terms

Sometimes you are not looking for the commit that introduced a breakage, but rather for a commit that caused a change between some other "old" state and "new" state. For example, you might be looking for the commit that introduced a particular fix.
Or you might be looking for the first commit in which the source-code filenames were finally all converted to your company's naming standard. Or whatever. In such cases it can be very confusing to use the terms "good" and "bad" to refer to "the state before the change" and "the state after the change". So instead, you can use the terms "old" and "new", respectively, in place of "good" and "bad". (But note that you cannot mix "good" and "bad" with "old" and "new" in a single session.)

In this more general usage, you provide git bisect with a "new" commit that has some property and an "old" commit that doesn't have that property. Each time git bisect checks out a commit, you test if that commit has the property. If it does, mark the commit as "new"; otherwise, mark it as "old". When the bisection is done, git bisect will report which commit introduced the property.

To use "old" and "new" instead of "good" and "bad", you must run git bisect start without commits as argument and then run the following commands to add the commits:

git bisect old [<rev>]

to indicate that a commit was before the sought change, or

git bisect new [<rev>...]

to indicate that it was after. To get a reminder of the currently used terms, use

git bisect terms

You can get just the old (respectively new) term with git bisect terms --term-old or git bisect terms --term-good.

If you would like to use your own terms instead of "bad"/"good" or "new"/"old", you can choose any names you like (except existing bisect subcommands like reset, start, ...) by starting the bisection using

git bisect start --term-old <term-old> --term-new <term-new>

For example, if you are looking for a commit that introduced a performance regression, you might use

git bisect start --term-old fast --term-new slow

Or if you are looking for the commit that fixed a bug, you might use

git bisect start --term-new fixed --term-old broken

Then, use git bisect <term-old> and git bisect <term-new> instead of git bisect good and git bisect bad to mark commits.

Bisect visualize/view

To see the currently remaining suspects in gitk, issue the following command during the bisection process (the subcommand view can be used as an alternative to visualize):

$ git bisect visualize

If the DISPLAY environment variable is not set, git log is used instead. You can also give command-line options such as -p and --stat.

$ git bisect visualize --stat

Bisect log and bisect replay

After having marked revisions as good or bad, issue the following command to show what has been done so far:

$ git bisect log

If you discover that you made a mistake in specifying the status of a revision, you can save the output of this command to a file, edit it to remove the incorrect entries, and then issue the following commands to return to a corrected state:

$ git bisect reset
$ git bisect replay that-file

Avoiding testing a commit

If, in the middle of a bisect session, you know that the suggested revision is not a good one to test (e.g. it fails to build and you know that the failure does not have anything to do with the bug you are chasing), you can manually select a nearby commit and test that one instead. For example:

$ git bisect good/bad            # previous round was good or bad.
Bisecting: 337 revisions left to test after this (roughly 9 steps)
$ git bisect visualize           # oops, that is uninteresting.
$ git reset --hard HEAD~3        # try 3 revisions before what was suggested

Then compile and test the chosen revision, and afterwards mark the revision as good or bad in the usual manner.
Bisect skip

Instead of choosing a nearby commit by yourself, you can ask Git to do it for you by issuing the command:

$ git bisect skip                # Current version cannot be tested

However, if you skip a commit adjacent to the one you are looking for, Git will be unable to tell exactly which of those commits was the first bad one.

You can also skip a range of commits, instead of just one commit, using range notation. For example:

$ git bisect skip v2.5..v2.6

This tells the bisect process that no commit after v2.5, up to and including v2.6, should be tested. Note that if you also want to skip the first commit of the range you would issue the command:

$ git bisect skip v2.5 v2.5..v2.6

This tells the bisect process that the commits between v2.5 and v2.6 (inclusive) should be skipped.

Cutting down bisection by giving more parameters to bisect start

You can further cut down the number of trials, if you know what part of the tree is involved in the problem you are tracking down, by specifying path parameters when issuing the bisect start command:

$ git bisect start -- arch/i386 include/asm-i386

If you know beforehand more than one good commit, you can narrow the bisect space down by specifying all of the good commits immediately after the bad commit when issuing the bisect start command:

$ git bisect start v2.6.20-rc6 v2.6.20-rc4 v2.6.20-rc1 --
                   # v2.6.20-rc6 is bad
                   # v2.6.20-rc4 and v2.6.20-rc1 are good

Bisect run

If you have a script that can tell if the current source code is good or bad, you can bisect by issuing the command:

$ git bisect run my_script arguments

Note that the script (my_script in the above example) should exit with code 0 if the current source code is good/old, and exit with a code between 1 and 127 (inclusive), except 125, if the current source code is bad/new. Any other exit code will abort the bisect process. It should be noted that a program that terminates via exit(-1) leaves $? = 255, (see the exit(3) manual page), as the value is chopped with & 0377.
The special exit code 125 should be used when the current source code cannot be tested. If the script exits with this code, the current revision will be skipped (see git bisect skip above). 125 was chosen as the highest sensible value to use for this purpose, because 126 and 127 are used by POSIX shells to signal specific error status (127 is for command not found, 126 is for command found but not executable; these details do not matter, as they are normal errors in the script, as far as bisect run is concerned).

You may often find that during a bisect session you want to have temporary modifications (e.g. s/#define DEBUG 0/#define DEBUG 1/ in a header file, or "revision that does not have this commit needs this patch applied to work around another problem this bisection is not interested in") applied to the revision being tested. To cope with such a situation, after the inner git bisect finds the next revision to test, the script can apply the patch before compiling, run the real test, and afterwards decide if the revision (possibly with the needed patch) passed the test and then rewind the tree to the pristine state. Finally the script should exit with the status of the real test to let the git bisect run command loop determine the eventual outcome of the bisect session.

--no-checkout Do not checkout the new working tree at each iteration of the bisection process. Instead just update a special reference named BISECT_HEAD to make it point to the commit that should be tested. This option may be useful when the test you would perform in each step does not require a checked out tree. If the repository is bare, --no-checkout is assumed.

--first-parent Follow only the first parent commit upon seeing a merge commit. In detecting regressions introduced through the merging of a branch, the merge commit will be identified as introduction of the bug and its ancestors will be ignored.
This option is particularly useful in avoiding false positives when a merged branch contained broken or non-buildable commits, but the merge itself was OK.
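The bisect run workflow can be sketched end to end in a throwaway repository. The history, tags, and grep-based test script below are purely illustrative: commits 1 through 5 are good, and the string BUG first appears in commit 6.

```shell
# Sketch: automate a bisection with `git bisect run`.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
for i in 1 2 3 4 5 6 7 8; do
    if [ "$i" -ge 6 ]; then
        printf 'BUG %s\n' "$i" > app.txt     # the "regression"
    else
        printf 'ok %s\n' "$i" > app.txt
    fi
    git add app.txt
    git commit -qm "commit $i"
    git tag "c$i"
done
git bisect start c8 c1   # c8 is known bad, c1 is known good
# The test: exit 0 = good, 1 = bad (125 would mean "cannot test, skip").
git bisect run sh -c '! grep -q BUG app.txt'
first_bad=$(git rev-parse refs/bisect/bad)   # left pointing at the culprit
git bisect reset
git log -1 --format=%s "$first_bad"          # prints: commit 6
```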
# git bisect

> Use binary search to find the commit that introduced a bug. Git automatically jumps back and forth in the commit graph to progressively narrow down the faulty commit.
> More information: https://git-scm.com/docs/git-bisect.

* Start a bisect session on a commit range bounded by a known buggy commit, and a known clean (typically older) one:

`git bisect start {{bad_commit}} {{good_commit}}`

* For each commit that `git bisect` selects, mark it as "bad" or "good" after testing it for the issue:

`git bisect {{good|bad}}`

* After `git bisect` pinpoints the faulty commit, end the bisect session and return to the previous branch:

`git bisect reset`

* Skip a commit during a bisect (e.g. one that fails the tests due to a different issue):

`git bisect skip`

* Display a log of what has been done so far:

`git bisect log`
systemd-ac-power
systemd-ac-power may be used to check whether the system is running on AC power or not. By default it will simply return success (if we can detect that we are running on AC power) or failure, with no output. This can be useful for example to debug ConditionACPower= (see systemd.unit(5)).

The following options are understood:

-v, --verbose Show the result as text instead of just returning success or failure.

--low Instead of showing the AC power state, show the low battery state. In this case it will return zero if all batteries are currently discharging and below 5% of maximum charge, and non-zero otherwise.

-h, --help Print a short help text and exit.

--version Print a short version string and exit.
# systemd-ac-power

> Report whether the computer is connected to an external power source.
> More information: https://www.freedesktop.org/software/systemd/man/systemd-ac-power.html.

* Silently check and return a 0 status code when running on AC power, and a non-zero code otherwise:

`systemd-ac-power`

* Additionally print `yes` or `no` to `stdout`:

`systemd-ac-power --verbose`
getopt
getopt is used to break up (parse) options in command lines for easy parsing by shell procedures, and to check for valid options. It uses the GNU getopt(3) routines to do this.

The parameters getopt is called with can be divided into two parts: options which modify the way getopt will do the parsing (the options and the optstring in the SYNOPSIS), and the parameters which are to be parsed (parameters in the SYNOPSIS). The second part will start at the first non-option parameter that is not an option argument, or after the first occurrence of '--'. If no '-o' or '--options' option is found in the first part, the first parameter of the second part is used as the short options string.

If the environment variable GETOPT_COMPATIBLE is set, or if the first parameter is not an option (does not start with a '-', the first format in the SYNOPSIS), getopt will generate output that is compatible with that of other versions of getopt(1). It will still do parameter shuffling and recognize optional arguments (see the COMPATIBILITY section for more information).

Traditional implementations of getopt(1) are unable to cope with whitespace and other (shell-specific) special characters in arguments and non-option parameters. To solve this problem, this implementation can generate quoted output which must once again be interpreted by the shell (usually by using the eval command). This has the effect of preserving those characters, but you must call getopt in a way that is no longer compatible with other versions (the second or third format in the SYNOPSIS). To determine whether this enhanced version of getopt(1) is installed, a special test option (-T) can be used.

-a, --alternative Allow long options to start with a single '-'.

-l, --longoptions longopts The long (multi-character) options to be recognized. More than one option name may be specified at once, by separating the names with commas. This option may be given more than once; the longopts are cumulative. Each long option name in longopts may be followed by one colon to indicate it has a required argument, and by two colons to indicate it has an optional argument.

-n, --name progname The name that will be used by the getopt(3) routines when it reports errors. Note that errors of getopt(1) are still reported as coming from getopt.

-o, --options shortopts The short (one-character) options to be recognized. If this option is not found, the first parameter of getopt that does not start with a '-' (and is not an option argument) is used as the short options string. Each short option character in shortopts may be followed by one colon to indicate it has a required argument, and by two colons to indicate it has an optional argument. The first character of shortopts may be '+' or '-' to influence the way options are parsed and output is generated (see the SCANNING MODES section for details).

-q, --quiet Disable error reporting by getopt(3).

-Q, --quiet-output Do not generate normal output. Errors are still reported by getopt(3), unless you also use -q.

-s, --shell shell Set quoting conventions to those of shell. If the -s option is not given, the BASH conventions are used. Valid arguments are currently 'sh', 'bash', 'csh', and 'tcsh'.

-T, --test Test if your getopt(1) is this enhanced version or an old version. This generates no output, and sets the error status to 4. Other implementations of getopt(1), and this version if the environment variable GETOPT_COMPATIBLE is set, will return '--' and error status 0.

-u, --unquoted Do not quote the output. Note that whitespace and special (shell-dependent) characters can cause havoc in this mode (like they do with other getopt(1) implementations).

-h, --help Display help text and exit.

-V, --version Print version and exit.
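The quoted-output/eval dance described above is usually wrapped in a small boilerplate loop. The option names below (-f/--file, -v/--verbose) are illustrative:

```shell
# Sketch: the canonical parse loop for enhanced getopt(1).
parse() {
    # getopt emits a quoted, normalized argument list; eval re-splits it.
    parsed=$(getopt --options f:v --longoptions file:,verbose -n demo -- "$@") || return 1
    eval set -- "$parsed"
    file='' verbose=0
    while true; do
        case "$1" in
            -f|--file)    file=$2; shift 2 ;;
            -v|--verbose) verbose=1; shift ;;
            --)           shift; break ;;   # end of options
        esac
    done
    echo "file=$file verbose=$verbose rest=$*"
}

# Whitespace inside the argument survives thanks to the quoted output:
parse -v --file 'a b.txt' extra   # prints: file=a b.txt verbose=1 rest=extra
```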
# getopt

> Parse command-line arguments.
> More information: https://www.gnu.org/software/libc/manual/html_node/Getopt.html.

* Parse optional `verbose`/`version` flags with shorthands:

`getopt --options vV --longoptions verbose,version -- --version --verbose`

* Add a `--file` option with a required argument with shorthand `-f`:

`getopt --options f: --longoptions file: -- --file=somefile`

* Add a `--verbose` option with an optional argument with shorthand `-v`, and pass a non-option parameter `arg`:

`getopt --options v:: --longoptions verbose:: -- --verbose arg`

* Accept `-r` and `--verbose` flags, a `--source` option with an optional argument, and a `--target` option with a required argument, with shorthands:

`getopt --options rv::s::t: --longoptions verbose,source::,target: -- -v --target target`
pkill
pgrep looks through the currently running processes and lists the process IDs which match the selection criteria to stdout. All the criteria have to match. For example,

$ pgrep -u root sshd

will only list the processes called sshd AND owned by root. On the other hand,

$ pgrep -u root,daemon

will list the processes owned by root OR daemon.

pkill will send the specified signal (by default SIGTERM) to each process instead of listing them on stdout.

pidwait will wait for each process instead of listing them on stdout.

-signal, --signal signal Defines the signal to send to each matched process. Either the numeric or the symbolic signal name can be used. In pgrep or pidwait mode only the long option can be used and has no effect unless used in conjunction with --require-handler to filter to processes with a userspace signal handler present for a particular signal.

-c, --count Suppress normal output; instead print a count of matching processes. When count does not match anything, e.g. returns zero, the command will return a non-zero value. Note that for pkill and pidwait, the count is the number of matching processes, not the processes that were successfully signaled or waited for.

-d, --delimiter delimiter Sets the string used to delimit each process ID in the output (by default a newline). (pgrep only.)

-e, --echo Display name and PID of the process being killed. (pkill only.)

-f, --full The pattern is normally only matched against the process name. When -f is set, the full command line is used.

-g, --pgroup pgrp,... Only match processes in the process group IDs listed. Process group 0 is translated into pgrep's, pkill's, or pidwait's own process group.

-G, --group gid,... Only match processes whose real group ID is listed. Either the numerical or symbolical value may be used.

-i, --ignore-case Match processes case-insensitively.

-l, --list-name List the process name as well as the process ID. (pgrep only.)

-a, --list-full List the full command line as well as the process ID. (pgrep only.)

-n, --newest Select only the newest (most recently started) of the matching processes.

-o, --oldest Select only the oldest (least recently started) of the matching processes.

-O, --older secs Select processes older than secs.

-P, --parent ppid,... Only match processes whose parent process ID is listed.

-s, --session sid,... Only match processes whose process session ID is listed. Session ID 0 is translated into pgrep's, pkill's, or pidwait's own session ID.

-t, --terminal term,... Only match processes whose controlling terminal is listed. The terminal name should be specified without the "/dev/" prefix.

-u, --euid euid,... Only match processes whose effective user ID is listed. Either the numerical or symbolical value may be used.

-U, --uid uid,... Only match processes whose real user ID is listed. Either the numerical or symbolical value may be used.

-v, --inverse Negates the matching. This option is usually used in pgrep's or pidwait's context. In pkill's context the short option is disabled to avoid accidental usage of the option.

-w, --lightweight Shows all thread IDs instead of PIDs in pgrep's or pidwait's context. In pkill's context this option is disabled.

-x, --exact Only match processes whose names (or command lines if -f is specified) exactly match the pattern.

-F, --pidfile file Read PIDs from file. This option is more useful for pkill or pidwait than pgrep.

-L, --logpidfile Fail if pidfile (see -F) not locked.

-r, --runstates D,R,S,Z,... Match only processes which match the process state.

-A, --ignore-ancestors Ignore all ancestors of pgrep, pkill, or pidwait. For example, this can be useful when elevating with sudo or similar tools.

-H, --require-handler Only match processes with a userspace signal handler present for the signal to be sent.

--cgroup name,... Match on the provided control group (cgroup) v2 name. See cgroups(8).

--ns pid Match processes that belong to the same namespaces. Required to run as root to match processes from other users. See --nslist for how to limit which namespaces to match.

--nslist name,... Match only the provided namespaces. Available namespaces: ipc, mnt, net, pid, user, uts.

-q, --queue value Use sigqueue(3) rather than kill(2), and the value argument is used to specify an integer to be sent with the signal. If the receiving process has installed a handler for this signal using the SA_SIGINFO flag to sigaction(2), then it can obtain this data via the si_value field of the siginfo_t structure.

-V, --version Display version information and exit.

-h, --help Display help and exit.
# pkill

> Signal processes by name. Mostly used for stopping processes.
> More information: https://www.man7.org/linux/man-pages/man1/pkill.1.html.

* Kill all processes which match:

`pkill "{{process_name}}"`

* Kill all processes which match their full command instead of just the process name:

`pkill -f "{{command_name}}"`

* Force kill matching processes (can't be blocked):

`pkill -9 "{{process_name}}"`

* Send SIGUSR1 signal to processes which match:

`pkill -USR1 "{{process_name}}"`

* Kill the main `firefox` process to close the browser:

`pkill --oldest "{{firefox}}"`
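The pgrep/pkill pairing can be sketched safely against a disposable background process; the 300-second sleep below is just a harmless target that we own:

```shell
# Sketch: find and terminate a disposable child process.
sleep 300 &                         # a harmless target we own
bgpid=$!
found=$(pgrep -x -P $$ sleep)       # -x exact name match, -P restrict to our children
echo "pgrep found PID $found (started as $bgpid)"
pkill -x -P $$ sleep                # send SIGTERM (the default) to the same match
wait "$bgpid" 2>/dev/null || true   # reap; exit status reflects the signal
echo "sleep terminated"
```

Restricting the match with -P (parent PID) and -x (exact name) keeps the example from signaling unrelated sleep processes elsewhere on the system.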
ssh-keyscan
ssh-keyscan is a utility for gathering the public SSH host keys of a number of hosts. It was designed to aid in building and verifying ssh_known_hosts files, the format of which is documented in sshd(8). ssh-keyscan provides a minimal interface suitable for use by shell and perl scripts.

ssh-keyscan uses non-blocking socket I/O to contact as many hosts as possible in parallel, so it is very efficient. The keys from a domain of 1,000 hosts can be collected in tens of seconds, even when some of those hosts are down or do not run sshd(8). For scanning, one does not need login access to the machines that are being scanned, nor does the scanning process involve any encryption.

Hosts to be scanned may be specified by hostname, address or by CIDR network range (e.g. 192.168.16/28). If a network range is specified, then all addresses in that range will be scanned.

The options are as follows:

-4 Force ssh-keyscan to use IPv4 addresses only.

-6 Force ssh-keyscan to use IPv6 addresses only.

-c Request certificates from target hosts instead of plain keys.

-D Print keys found as SSHFP DNS records. The default is to print keys in a format usable as a ssh(1) known_hosts file.

-f file Read hosts or “addrlist namelist” pairs from file, one per line. If ‘-’ is supplied instead of a filename, ssh-keyscan will read from the standard input. Names read from a file must start with an address, hostname or CIDR network range to be scanned. Addresses and hostnames may optionally be followed by comma-separated name or address aliases that will be copied to the output. For example:

192.168.11.0/24
10.20.1.1
happy.example.org
10.0.0.1,sad.example.org

-H Hash all hostnames and addresses in the output. Hashed names may be used normally by ssh(1) and sshd(8), but they do not reveal identifying information should the file's contents be disclosed.

-O option Specify a key/value option. At present, only a single option is supported:

hashalg=algorithm Selects a hash algorithm to use when printing SSHFP records using the -D flag. Valid algorithms are “sha1” and “sha256”. The default is to print both.

-p port Connect to port on the remote host.

-T timeout Set the timeout for connection attempts. If timeout seconds have elapsed since a connection was initiated to a host or since the last time anything was read from that host, the connection is closed and the host in question considered unavailable. The default is 5 seconds.

-t type Specify the type of the key to fetch from the scanned hosts. The possible values are “dsa”, “ecdsa”, “ed25519”, “ecdsa-sk”, “ed25519-sk”, or “rsa”. Multiple values may be specified by separating them with commas. The default is to fetch “rsa”, “ecdsa”, “ed25519”, “ecdsa-sk”, and “ed25519-sk” keys.

-v Verbose mode: print debugging messages about progress.

If an ssh_known_hosts file is constructed using ssh-keyscan without verifying the keys, users will be vulnerable to man in the middle attacks. On the other hand, if the security model allows such a risk, ssh-keyscan can help in the detection of tampered keyfiles or man in the middle attacks which have begun after the ssh_known_hosts file was created.
# ssh-keyscan

> Get the public ssh keys of remote hosts. More information: https://man.openbsd.org/ssh-keyscan.

* Retrieve all public ssh keys of a remote host:

`ssh-keyscan {{host}}`

* Retrieve all public ssh keys of a remote host listening on a specific port:

`ssh-keyscan -p {{port}} {{host}}`

* Retrieve certain types of public ssh keys of a remote host:

`ssh-keyscan -t {{rsa,dsa,ecdsa,ed25519}} {{host}}`

* Manually update the ssh known_hosts file with the fingerprint of a given host:

`ssh-keyscan -H {{host}} >> ~/.ssh/known_hosts`
test
The test utility shall evaluate the expression and indicate the result of the evaluation by its exit status. An exit status of zero indicates that the expression evaluated as true and an exit status of 1 indicates that the expression evaluated as false. In the second form of the utility, where the utility name used is [ rather than test, the application shall ensure that the closing square bracket is a separate argument. The test and [ utilities may be implemented as a single linked utility which examines the basename of the zeroth command line argument to determine whether to behave as the test or [ variant. Applications using the exec() family of functions to execute these utilities shall ensure that the argument passed in arg0 or argv[0] is '[' when executing the [ utility and has a basename of "test" when executing the test utility. The test utility shall not recognize the "--" argument in the manner specified by Guideline 10 in the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines. No options shall be supported.
# test

> Check file types and compare values. Returns 0 if the condition evaluates to true, 1 if it evaluates to false. More information: https://www.gnu.org/software/coreutils/test.

* Test if a given variable is equal to a given string:

`test "{{$MY_VAR}}" = "{{/bin/zsh}}"`

* Test if a given variable is empty:

`test -z "{{$GIT_BRANCH}}"`

* Test if a file exists:

`test -f "{{path/to/file_or_directory}}"`

* Test if a directory does not exist:

`test ! -d "{{path/to/directory}}"`

* If A is true, then do B, or C in the case of an error (note that C runs not only when A is false, but also when B itself fails):

`test {{condition}} && {{echo "true"}} || {{echo "false"}}`
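The exit-status behavior described above is easy to see from a POSIX shell; a minimal sketch:

```sh
# test communicates its result only through the exit status:
# 0 means the expression was true, 1 means it was false.
test 5 -gt 3
echo "exit status: $?"    # prints: exit status: 0

# [ is the same utility under another name; the closing ] must be
# passed as a separate argument.
[ -n "some text" ]
echo "exit status: $?"    # prints: exit status: 0

# POSIX string comparison uses a single =; -z is true for an empty string.
var=""
if test -z "$var"; then
  echo "var is empty"
fi
```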
systemd-notify
systemd-notify may be called by service scripts to notify the invoking service manager about status changes. It can be used to send arbitrary information, encoded in an environment-block-like list of strings. Most importantly, it can be used for start-up completion notification. This is mostly just a wrapper around sd_notify() and makes this functionality available to shell scripts. For details see sd_notify(3). The command line may carry a list of environment variables to send as part of the status update. Note that systemd will refuse reception of status updates from this command unless NotifyAccess= is appropriately set for the service unit this command is called from. See systemd.service(5) for details. Note that sd_notify() notifications may be attributed to units correctly only if either the sending process is still around at the time the service manager processes the message, or if the sending process is explicitly runtime-tracked by the service manager. The latter is the case if the service manager originally forked off the process, i.e. on all processes that match NotifyAccess=main or NotifyAccess=exec. Conversely, if an auxiliary process of the unit sends an sd_notify() message and immediately exits, the service manager might not be able to properly attribute the message to the unit, and thus will ignore it, even if NotifyAccess=all is set for it. To address this systemd-notify will wait until the notification message has been processed by the service manager. When --no-block is used, this synchronization for reception of notifications is disabled, and hence the aforementioned race may occur if the invoking process is not the service manager or spawned by the service manager. systemd-notify will first attempt to invoke sd_notify() pretending to have the PID of the parent process of systemd-notify (i.e. the invoking process). This will only succeed when invoked with sufficient privileges. On failure, it will then fall back to invoking it under its own PID. 
This behaviour is useful in order that when the tool is invoked from a shell script the shell process — and not the systemd-notify process — appears as sender of the message, which in turn is helpful if the shell process is the main process of a service, due to the limitations of NotifyAccess=all. Use the --pid= switch to tweak this behaviour. The following options are understood: --ready Inform the invoking service manager about service start-up or configuration reload completion. This is equivalent to systemd-notify READY=1. For details about the semantics of this option see sd_notify(3). --reloading Inform the invoking service manager about the beginning of a configuration reload cycle. This is equivalent to systemd-notify RELOADING=1 (but implicitly also sets a MONOTONIC_USEC= field as required for Type=notify-reload services, see systemd.service(5) for details). For details about the semantics of this option see sd_notify(3). --stopping Inform the invoking service manager about the beginning of the shutdown phase of the service. This is equivalent to systemd-notify STOPPING=1. For details about the semantics of this option see sd_notify(3). --pid= Inform the service manager about the main PID of the service. Takes a PID as argument. If the argument is specified as "auto" or omitted, the PID of the process that invoked systemd-notify is used, except if that's the service manager. If the argument is specified as "self", the PID of the systemd-notify command itself is used, and if "parent" is specified the calling process' PID is used — even if it is the service manager. The latter is equivalent to systemd-notify MAINPID=$PID. For details about the semantics of this option see sd_notify(3). 
If this switch is used in a systemd-notify invocation from a process that shall become the new main process of a service — and which is not the process forked off by the service manager (or the current main process) —, then it is essential to set NotifyAccess=all in the service unit file, or otherwise the notification will be ignored for security reasons. See systemd.service(5) for details. --uid=USER Set the user ID to send the notification from. Takes a UNIX user name or numeric UID. When specified the notification message will be sent with the specified UID as sender, in place of the user the command was invoked as. This option requires sufficient privileges in order to be able to manipulate the user identity of the process. --status= Send a free-form human readable status string for the daemon to the service manager. This option takes the status string as argument. This is equivalent to systemd-notify STATUS=.... For details about the semantics of this option see sd_notify(3). This information is shown in systemctl(1)'s status output, among other places. --booted Returns 0 if the system was booted up with systemd, non-zero otherwise. If this option is passed, no message is sent. This option is hence unrelated to the other options. For details about the semantics of this option, see sd_booted(3). An alternate way to check for this state is to call systemctl(1) with the is-system-running command. It will return "offline" if the system was not booted with systemd. --no-block Do not synchronously wait for the requested operation to finish. Use of this option is only recommended when systemd-notify is spawned by the service manager, or when the invoking process is directly spawned by the service manager and has enough privileges to allow systemd-notify to send the notification on its behalf. Sending notifications with this option set is prone to race conditions in all other cases. 
--exec If specified systemd-notify will execute another command line after it completed its operation, replacing its own process. If used, the list of assignments to include in the message sent must be followed by a ";" character (as separate argument), followed by the command line to execute. This permits "chaining" of commands, i.e. issuing one operation, followed immediately by another, without changing PIDs. Note that many shells interpret ";" as their own separator for command lines, hence when systemd-notify is invoked from a shell the semicolon must usually be escaped as "\;". --fd= Send a file descriptor along with the notification message. This is useful when invoked in services that have the FileDescriptorStoreMax= setting enabled, see systemd.service(5) for details. The specified file descriptor must be passed to systemd-notify when invoked. This option may be used multiple times to pass multiple file descriptors in a single notification message. To use this functionality from a bash shell, use an expression like the following: systemd-notify --fd=4 --fd=5 4</some/file 5</some/other/file --fdname= Set a name to assign to the file descriptors passed via --fd= (see above). This controls the "FDNAME=" field. This setting may only be specified once, and applies to all file descriptors passed. Invoke this tool multiple times in case multiple file descriptors with different file descriptor names shall be submitted. -h, --help Print a short help text and exit. --version Print a short version string and exit.
# systemd-notify

> Notify the service manager about start-up completion and other daemon status changes. This command is useless outside systemd service scripts. More information: https://www.freedesktop.org/software/systemd/man/systemd-notify.html.

* Notify systemd that the service has completed its initialization and is fully started. It should be invoked when the service is ready to accept incoming requests:

`systemd-notify --ready`

* Provide a custom status message to systemd (this information is shown by `systemctl status`):

`systemd-notify --status="{{Add custom status message here...}}"`

* Check whether the system was booted by systemd (exits with status 0 if it was; no notification is sent):

`systemd-notify --booted`
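As a sketch of how the pieces fit together: a `Type=notify` service whose start-up script calls `systemd-notify --ready` once initialization finishes. The unit name and the `ExecStart=` path are hypothetical, and this assumes a shell script is the service's main process:

```ini
; mydaemon.service: hypothetical unit (sketch, not a drop-in file)
[Unit]
Description=Example daemon that reports readiness via systemd-notify

[Service]
Type=notify
; The notification may come from a child of the shell script rather than
; the process systemd forked, so allow notifications from any unit process.
NotifyAccess=all
; The script would perform its initialization, then call:
;   systemd-notify --ready --status="Initialization complete"
ExecStart=/usr/local/bin/mydaemon-start

[Install]
WantedBy=multi-user.target
```

With `Type=notify`, systemd keeps the unit in the "activating" state until the READY=1 notification arrives, which is exactly what `systemd-notify --ready` sends.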
pr
Paginate or columnate FILE(s) for printing. With no FILE, or when FILE is -, read standard input. Mandatory arguments to long options are mandatory for short options too. +FIRST_PAGE[:LAST_PAGE], --pages=FIRST_PAGE[:LAST_PAGE] begin [stop] printing with page FIRST_[LAST_]PAGE -COLUMN, --columns=COLUMN output COLUMN columns and print columns down, unless -a is used. Balance number of lines in the columns on each page -a, --across print columns across rather than down, used together with -COLUMN -c, --show-control-chars use hat notation (^G) and octal backslash notation -d, --double-space double space the output -D, --date-format=FORMAT use FORMAT for the header date -e[CHAR[WIDTH]], --expand-tabs[=CHAR[WIDTH]] expand input CHARs (TABs) to tab WIDTH (8) -F, -f, --form-feed use form feeds instead of newlines to separate pages (by a 3-line page header with -F or a 5-line header and trailer without -F) -h, --header=HEADER use a centered HEADER instead of filename in page header, -h "" prints a blank line, don't use -h"" -i[CHAR[WIDTH]], --output-tabs[=CHAR[WIDTH]] replace spaces with CHARs (TABs) to tab WIDTH (8) -J, --join-lines merge full lines, turns off -W line truncation, no column alignment, --sep-string[=STRING] sets separators -l, --length=PAGE_LENGTH set the page length to PAGE_LENGTH (66) lines (default number of lines of text 56, and with -F 63). 
implies -t if PAGE_LENGTH <= 10 -m, --merge print all files in parallel, one in each column, truncate lines, but join lines of full length with -J -n[SEP[DIGITS]], --number-lines[=SEP[DIGITS]] number lines, use DIGITS (5) digits, then SEP (TAB), default counting starts with 1st line of input file -N, --first-line-number=NUMBER start counting with NUMBER at 1st line of first page printed (see +FIRST_PAGE) -o, --indent=MARGIN offset each line with MARGIN (zero) spaces, do not affect -w or -W, MARGIN will be added to PAGE_WIDTH -r, --no-file-warnings omit warning when a file cannot be opened -s[CHAR], --separator[=CHAR] separate columns by a single character, default for CHAR is the <TAB> character without -w and 'no char' with -w. -s[CHAR] turns off line truncation of all 3 column options (-COLUMN|-a -COLUMN|-m) except -w is set -S[STRING], --sep-string[=STRING] separate columns by STRING, without -S: Default separator <TAB> with -J and <space> otherwise (same as -S" "), no effect on column options -t, --omit-header omit page headers and trailers; implied if PAGE_LENGTH <= 10 -T, --omit-pagination omit page headers and trailers, eliminate any pagination by form feeds set in input files -v, --show-nonprinting use octal backslash notation -w, --width=PAGE_WIDTH set page width to PAGE_WIDTH (72) characters for multiple text-column output only, -s[char] turns off (72) -W, --page-width=PAGE_WIDTH set page width to PAGE_WIDTH (72) characters always, truncate lines, except -J option is set, no interference with -S or -s --help display this help and exit --version output version information and exit
# pr

> Paginate or columnate files for printing. More information: https://www.gnu.org/software/coreutils/pr.

* Print multiple files with a default header and footer:

`pr {{file1}} {{file2}} {{file3}}`

* Print with a custom centered header:

`pr -h "{{header}}" {{file1}} {{file2}} {{file3}}`

* Print with numbered lines and a custom date format:

`pr -n -D "{{format}}" {{file1}} {{file2}} {{file3}}`

* Print all files together, one in each column, without a header or footer:

`pr -m -T {{file1}} {{file2}} {{file3}}`

* Print, beginning at page 2 up to page 5, with a given page length (including header and footer):

`pr +{{2}}:{{5}} -l {{page_length}} {{file1}} {{file2}} {{file3}}`

* Print with an offset for each line and a truncating custom page width:

`pr -o {{offset}} -W {{width}} {{file1}} {{file2}} {{file3}}`
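Columnation is the less obvious part of `pr`; a quick sketch of two-column output from standard input:

```sh
# -2 folds the input into two columns (filled down the page and balanced),
# -t drops the page header and trailer, and -w caps the page width.
printf '%s\n' one two three four five six | pr -2 -t -w 20
```

With six input lines this yields three rows, and the first row pairs `one` with `four`: columns are filled down rather than across (add `-a` to fill across instead).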
git-symbolic-ref
Given one argument, reads which branch head the given symbolic ref refers to and outputs its path, relative to the .git/ directory. Typically you would give HEAD as the <name> argument to see which branch your working tree is on. Given two arguments, creates or updates a symbolic ref <name> to point at the given branch <ref>. Given --delete and an additional argument, deletes the given symbolic ref. A symbolic ref is a regular file that stores a string that begins with ref: refs/. For example, your .git/HEAD is a regular file whose contents is ref: refs/heads/master. -d, --delete Delete the symbolic ref <name>. -q, --quiet Do not issue an error message if the <name> is not a symbolic ref but a detached HEAD; instead exit with non-zero status silently. --short When showing the value of <name> as a symbolic ref, try to shorten the value, e.g. from refs/heads/master to master. --recurse, --no-recurse When showing the value of <name> as a symbolic ref, if <name> refers to another symbolic ref, follow such a chain of symbolic refs until the result no longer points at a symbolic ref (--recurse, which is the default). --no-recurse stops after dereferencing only a single level of symbolic ref. -m Update the reflog for <name> with <reason>. This is valid only when creating or updating a symbolic ref.
# git symbolic-ref

> Read, change, or delete files that store references. More information: https://git-scm.com/docs/git-symbolic-ref.

* Store a reference by a name:

`git symbolic-ref refs/{{name}} {{ref}}`

* Store a reference by name, including a message with a reason for the update:

`git symbolic-ref -m "{{message}}" refs/{{name}} refs/heads/{{branch_name}}`

* Read a reference by name:

`git symbolic-ref refs/{{name}}`

* Delete a reference by name:

`git symbolic-ref --delete refs/{{name}}`

* For scripting, hide errors with `--quiet` and use `--short` to simplify ("refs/heads/X" prints as "X"):

`git symbolic-ref --quiet --short refs/{{name}}`
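A round trip in a scratch repository makes the read/write behavior concrete (the branch name `feature-x` is an arbitrary example):

```sh
# Create a throwaway repository to play in.
repo=$(mktemp -d)
cd "$repo"
git init --quiet

# Point HEAD at a branch (it need not exist yet), then read it back.
git symbolic-ref HEAD refs/heads/feature-x
git symbolic-ref HEAD            # prints: refs/heads/feature-x
git symbolic-ref --short HEAD    # prints: feature-x
```

Because HEAD here is a symbolic ref to an unborn branch, no commit is required for any of these commands to work.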
tty
Print the file name of the terminal connected to standard input. -s, --silent, --quiet print nothing, only return an exit status --help display this help and exit --version output version information and exit
# tty

> Returns terminal name. More information: https://www.gnu.org/software/coreutils/tty.

* Print the file name of this terminal:

`tty`
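In scripts, `tty` is mostly useful for its exit status; redirecting stdin away from a terminal shows both behaviors:

```sh
# With stdin redirected away from a terminal, tty prints "not a tty"
# and exits with status 1 (the || true keeps a `set -e` shell happy).
tty < /dev/null || true

# -s suppresses the output, leaving only the exit status for scripting.
tty -s < /dev/null || echo "stdin is not a terminal"
```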
git-instaweb
A simple script to set up gitweb and a web server for browsing the local repository. -l, --local Only bind the web server to the local IP (127.0.0.1). -d, --httpd The HTTP daemon command-line that will be executed. Command-line options may be specified here, and the configuration file will be added at the end of the command-line. Currently apache2, lighttpd, mongoose, plackup, python and webrick are supported. (Default: lighttpd) -m, --module-path The module path (only needed if httpd is Apache). (Default: /usr/lib/apache2/modules) -p, --port The port number to bind the httpd to. (Default: 1234) -b, --browser The web browser that should be used to view the gitweb page. This will be passed to the git web--browse helper script along with the URL of the gitweb instance. See git-web--browse(1) for more information about this. If the script fails, the URL will be printed to stdout. start, --start Start the httpd instance and exit. Regenerate configuration files as necessary for spawning a new instance. stop, --stop Stop the httpd instance and exit. This does not generate any of the configuration files for spawning a new instance, nor does it close the browser. restart, --restart Restart the httpd instance and exit. Regenerate configuration files as necessary for spawning a new instance.
# git instaweb

> Helper to launch a GitWeb server. More information: https://git-scm.com/docs/git-instaweb.

* Launch a GitWeb server for the current Git repository:

`git instaweb --start`

* Listen only on localhost:

`git instaweb --start --local`

* Listen on a specific port:

`git instaweb --start --port {{1234}}`

* Use a specified HTTP daemon:

`git instaweb --start --httpd {{lighttpd|apache2|mongoose|plackup|webrick}}`

* Also auto-launch a web browser:

`git instaweb --start --browser`

* Stop the currently running GitWeb server:

`git instaweb --stop`

* Restart the currently running GitWeb server:

`git instaweb --restart`
newgrp
The newgrp utility shall create a new shell execution environment with a new real and effective group identification. Of the attributes listed in Section 2.12, Shell Execution Environment, the new shell execution environment shall retain the working directory, file creation mask, and exported variables from the previous environment (that is, open files, traps, unexported variables, alias definitions, shell functions, and set options may be lost). All other aspects of the process environment that are preserved by the exec family of functions defined in the System Interfaces volume of POSIX.1‐2017 shall also be preserved by newgrp; whether other aspects are preserved is unspecified. A failure to assign the new group identifications (for example, for security or password-related reasons) shall not prevent the new shell execution environment from being created. The newgrp utility shall affect the supplemental groups for the process as follows: * On systems where the effective group ID is normally in the supplementary group list (or whenever the old effective group ID actually is in the supplementary group list): -- If the new effective group ID is also in the supplementary group list, newgrp shall change the effective group ID. -- If the new effective group ID is not in the supplementary group list, newgrp shall add the new effective group ID to the list, if there is room to add it. * On systems where the effective group ID is not normally in the supplementary group list (or whenever the old effective group ID is not in the supplementary group list): -- If the new effective group ID is in the supplementary group list, newgrp shall delete it. -- If the old effective group ID is not in the supplementary list, newgrp shall add it if there is room. Note: The System Interfaces volume of POSIX.1‐2017 does not specify whether the effective group ID of a process is included in its supplementary group list. 
With no operands, newgrp shall change the effective group back to the groups identified in the user's user entry, and shall set the list of supplementary groups to that set in the user's group database entries. If the first argument is '-', the results are unspecified. If a password is required for the specified group, and the user is not listed as a member of that group in the group database, the user shall be prompted to enter the correct password for that group. If the user is listed as a member of that group, no password shall be requested. If no password is required for the specified group, it is implementation-defined whether users not listed as members of that group can change to that group. Whether or not a password is required, implementation-defined system accounting or security mechanisms may impose additional authorization restrictions that may cause newgrp to write a diagnostic message and suppress the changing of the group identification. The newgrp utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines, except for the unspecified usage of '-'. The following option shall be supported: -l (The letter ell.) Change the environment to what would be expected if the user actually logged in again.
# newgrp

> Switch primary group membership. More information: https://manned.org/newgrp.

* Change user's primary group membership:

`newgrp {{group_name}}`

* Reset primary group membership to user's default group in `/etc/passwd`:

`newgrp`
dircolors
Output commands to set the LS_COLORS environment variable. Determine format of output: -b, --sh, --bourne-shell output Bourne shell code to set LS_COLORS -c, --csh, --c-shell output C shell code to set LS_COLORS -p, --print-database output defaults --print-ls-colors output fully escaped colors for display --help display this help and exit --version output version information and exit If FILE is specified, read it to determine which colors to use for which file types and extensions. Otherwise, a precompiled database is used. For details on the format of these files, run 'dircolors --print-database'.
# dircolors

> Output commands to set the LS_COLORS environment variable and style `ls`, `dir`, etc. More information: https://www.gnu.org/software/coreutils/dircolors.

* Output commands to set LS_COLORS using default colors:

`dircolors`

* Output commands to set LS_COLORS using colors from a file:

`dircolors {{path/to/file}}`

* Output commands for Bourne shell:

`dircolors --bourne-shell`

* Output commands for C shell:

`dircolors --c-shell`

* View the default colors for file types and extensions:

`dircolors --print-database`
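Note that `dircolors` only prints shell code; nothing changes until that output is evaluated. A minimal sketch:

```sh
# dircolors emits shell code on stdout; eval it to actually set and
# export LS_COLORS in the current shell.
eval "$(dircolors -b)"

# LS_COLORS now holds the precompiled default color database.
test -n "$LS_COLORS" && echo "LS_COLORS is set"
```

This is why shell start-up files typically contain a line like `eval "$(dircolors -b)"` rather than a bare `dircolors` call.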
utmpdump
utmpdump is a simple program to dump UTMP and WTMP files in raw format, so they can be examined. utmpdump reads from stdin unless a filename is passed. -f, --follow Output appended data as the file grows. -o, --output file Write command output to file instead of standard output. -r, --reverse Undump, write back edited login information into the utmp or wtmp files. -h, --help Display help text and exit. -V, --version Print version and exit.
# utmpdump

> Dump and load btmp, utmp and wtmp accounting files. More information: https://manned.org/utmpdump.

* Dump the `/var/log/wtmp` file to `stdout` as plain text:

`utmpdump {{/var/log/wtmp}}`

* Load a previously dumped file into `/var/log/wtmp`:

`utmpdump -r {{dumpfile}} > {{/var/log/wtmp}}`
lp
lp submits files for printing or alters a pending job. Use a filename of "-" to force printing from the standard input. THE DEFAULT DESTINATION CUPS provides many ways to set the default destination. The LPDEST and PRINTER environment variables are consulted first. If neither are set, the current default set using the lpoptions(1) command is used, followed by the default set using the lpadmin(8) command. The following options are recognized by lp: -- Marks the end of options; use this to print a file whose name begins with a dash (-). -E Forces encryption when connecting to the server. -U username Specifies the username to use when connecting to the server. -c This option is provided for backwards-compatibility only. On systems that support it, this option forces the print file to be copied to the spool directory before printing. In CUPS, print files are always sent to the scheduler via IPP which has the same effect. -d destination Prints files to the named printer. -h hostname[:port] Chooses an alternate server. -i job-id Specifies an existing job to modify. -m Sends an email when the job is completed. -n copies Sets the number of copies to print. -o "name=value [ ... name=value ]" Sets one or more job options. See "COMMON JOB OPTIONS" below. -q priority Sets the job priority from 1 (lowest) to 100 (highest). The default priority is 50. -s Do not report the resulting job IDs (silent mode.) -t "name" Sets the job name. -H hh:mm -H hold -H immediate -H restart -H resume Specifies when the job should be printed. A value of immediate will print the file immediately, a value of hold will hold the job indefinitely, and a UTC time value (HH:MM) will hold the job until the specified UTC (not local) time. Use a value of resume with the -i option to resume a held job. Use a value of restart with the -i option to restart a completed job. -P page-list Specifies which pages to print in the document. 
The list can contain a list of numbers and ranges (#-#) separated by commas, e.g., "1,3-5,16". The page numbers refer to the output pages and not the document's original pages - options like "number-up" can affect the numbering of the pages. COMMON JOB OPTIONS Aside from the printer-specific options reported by the lpoptions(1) command, the following generic options are available: -o job-sheets=name Prints a cover page (banner) with the document. The "name" can be "classified", "confidential", "secret", "standard", "topsecret", or "unclassified". -o media=size Sets the page size to size. Most printers support at least the size names "a4", "letter", and "legal". -o number-up={2|4|6|9|16} Prints 2, 4, 6, 9, or 16 document (input) pages on each output page. -o orientation-requested=4 Prints the job in landscape (rotated 90 degrees counter- clockwise). -o orientation-requested=5 Prints the job in landscape (rotated 90 degrees clockwise). -o orientation-requested=6 Prints the job in reverse portrait (rotated 180 degrees). -o print-quality=3 -o print-quality=4 -o print-quality=5 Specifies the output quality - draft (3), normal (4), or best (5). -o sides=one-sided Prints on one side of the paper. -o sides=two-sided-long-edge Prints on both sides of the paper for portrait output. -o sides=two-sided-short-edge Prints on both sides of the paper for landscape output.
# lp

> Print files. More information: https://manned.org/lp.

* Print the output of a command to the default printer (see `lpstat` command):

`echo "test" | lp`

* Print a file to the default printer:

`lp {{path/to/filename}}`

* Print a file to a named printer (see `lpstat` command):

`lp -d {{printer_name}} {{path/to/filename}}`

* Print N copies of file to default printer (replace N with desired number of copies):

`lp -n {{N}} {{path/to/filename}}`

* Print only certain pages to the default printer (print pages 1, 3-5, and 16):

`lp -P 1,3-5,16 {{path/to/filename}}`

* Resume printing a job:

`lp -i {{job_id}} -H resume`
git-verify-tag
Validates the gpg signature created by git tag. --raw Print the raw gpg status output to standard error instead of the normal human-readable output. -v, --verbose Print the contents of the tag object before validating it. <tag>... SHA-1 identifiers of Git tag objects.
# git verify-tag

> Check for GPG verification of tags. If a tag wasn't signed, an error will occur. More information: https://git-scm.com/docs/git-verify-tag.

* Check tags for a GPG signature:

`git verify-tag {{tag1 optional_tag2 ...}}`

* Check tags for a GPG signature and show details for each tag:

`git verify-tag {{tag1 optional_tag2 ...}} --verbose`

* Check tags for a GPG signature and print the raw details:

`git verify-tag {{tag1 optional_tag2 ...}} --raw`
du
By default, the du utility shall write to standard output the size of the file space allocated to, and the size of the file space allocated to each subdirectory of, the file hierarchy rooted in each of the specified files. By default, when a symbolic link is encountered on the command line or in the file hierarchy, du shall count the size of the symbolic link (rather than the file referenced by the link), and shall not follow the link to another portion of the file hierarchy. The size of the file space allocated to a file of type directory shall be defined as the sum total of space allocated to all files in the file hierarchy rooted in the directory plus the space allocated to the directory itself. When du cannot stat() files or stat() or read directories, it shall report an error condition and the final exit status is affected. A file that occurs multiple times under one file operand and that has a link count greater than 1 shall be counted and written for only one entry. It is implementation-defined whether a file that has a link count no greater than 1 is counted and written just once, or is counted and written for each occurrence. It is implementation-defined whether a file that occurs under one file operand is counted for other file operands. The directory entry that is selected in the report is unspecified. By default, file sizes shall be written in 512-byte units, rounded up to the next 512-byte unit. The du utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines. The following options shall be supported: -a In addition to the default output, report the size of each file not of type directory in the file hierarchy rooted in the specified file. The -a option shall not affect whether non-directories given as file operands are listed. -H If a symbolic link is specified on the command line, du shall count the size of the file or file hierarchy referenced by the link. 
-k Write the file sizes in units of 1024 bytes, rather than the default 512-byte units. -L If a symbolic link is specified on the command line or encountered during the traversal of a file hierarchy, du shall count the size of the file or file hierarchy referenced by the link. -s Instead of the default output, report only the total sum for each of the specified files. -x When evaluating file sizes, evaluate only those files that have the same device as the file specified by the file operand. Specifying more than one of the mutually-exclusive options -H and -L shall not be considered an error. The last option specified shall determine the behavior of the utility.
# du

> Disk usage: estimate and summarize file and directory space usage. More information: https://ss64.com/osx/du.html.

* List the sizes of a directory and any subdirectories, in the given unit (KiB/MiB/GiB):
`du -{{k|m|g}} {{path/to/directory}}`

* List the sizes of a directory and any subdirectories, in human-readable form (i.e. auto-selecting the appropriate unit for each size):
`du -h {{path/to/directory}}`

* Show the size of a single directory, in human-readable units:
`du -sh {{path/to/directory}}`

* List the human-readable sizes of a directory and of all the files and directories within it:
`du -ah {{path/to/directory}}`

* List the human-readable sizes of a directory and any subdirectories, up to N levels deep:
`du -h -d {{2}} {{path/to/directory}}`

* List the human-readable size of all `.jpg` files in subdirectories of the current directory, and show a cumulative total at the end:
`du -ch {{*/*.jpg}}`
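As a concrete sketch of how `-k`, `-a`, and `-s` interact, the following session builds a tiny hierarchy in a temporary directory (the file names are invented for the demo):

```shell
# Build a tiny hierarchy in a temporary directory (names are illustrative)
dir=$(mktemp -d)
mkdir -p "$dir/sub"
printf 'hello\n' > "$dir/sub/file.txt"

du -k "$dir"     # one line per directory, sizes in 1024-byte units
du -ak "$dir"    # -a: additionally report each non-directory file
du -sk "$dir"    # -s: only the grand total for each operand

rm -rf "$dir"    # clean up
```

Note that the reported numbers depend on the filesystem's allocation unit, so exact sizes vary between systems.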
pgrep
pgrep looks through the currently running processes and lists the process IDs which match the selection criteria to stdout. All the criteria have to match. For example, $ pgrep -u root sshd will only list the processes called sshd AND owned by root. On the other hand, $ pgrep -u root,daemon will list the processes owned by root OR daemon. pkill will send the specified signal (by default SIGTERM) to each process instead of listing them on stdout. pidwait will wait for each process instead of listing them on stdout. -signal --signal signal Defines the signal to send to each matched process. Either the numeric or the symbolic signal name can be used. In pgrep or pidwait mode only the long option can be used and has no effect unless used in conjunction with --require-handler to filter to processes with a userspace signal handler present for a particular signal. -c, --count Suppress normal output; instead print a count of matching processes. When the count is zero, i.e. nothing matched, the command returns a non-zero value. Note that for pkill and pidwait, the count is the number of matching processes, not the processes that were successfully signaled or waited for. -d, --delimiter delimiter Sets the string used to delimit each process ID in the output (by default a newline). (pgrep only.) -e, --echo Display name and PID of the process being killed. (pkill only.) -f, --full The pattern is normally only matched against the process name. When -f is set, the full command line is used. -g, --pgroup pgrp,... Only match processes in the process group IDs listed. Process group 0 is translated into pgrep's, pkill's, or pidwait's own process group. -G, --group gid,... Only match processes whose real group ID is listed. Either the numerical or symbolical value may be used. -i, --ignore-case Match processes case-insensitively. -l, --list-name List the process name as well as the process ID. (pgrep only.) 
-a, --list-full List the full command line as well as the process ID. (pgrep only.) -n, --newest Select only the newest (most recently started) of the matching processes. -o, --oldest Select only the oldest (least recently started) of the matching processes. -O, --older secs Select processes older than secs. -P, --parent ppid,... Only match processes whose parent process ID is listed. -s, --session sid,... Only match processes whose process session ID is listed. Session ID 0 is translated into pgrep's, pkill's, or pidwait's own session ID. -t, --terminal term,... Only match processes whose controlling terminal is listed. The terminal name should be specified without the "/dev/" prefix. -u, --euid euid,... Only match processes whose effective user ID is listed. Either the numerical or symbolical value may be used. -U, --uid uid,... Only match processes whose real user ID is listed. Either the numerical or symbolical value may be used. -v, --inverse Negates the matching. This option is usually used in pgrep's or pidwait's context. In pkill's context the short option is disabled to avoid accidental usage of the option. -w, --lightweight Shows all thread ids instead of pids in pgrep's or pidwait's context. In pkill's context this option is disabled. -x, --exact Only match processes whose names (or command lines if -f is specified) exactly match the pattern. -F, --pidfile file Read PIDs from file. This option is more useful for pkill or pidwait than pgrep. -L, --logpidfile Fail if pidfile (see -F) not locked. -r, --runstates D,R,S,Z,... Match only processes which match the process state. -A, --ignore-ancestors Ignore all ancestors of pgrep, pkill, or pidwait. For example, this can be useful when elevating with sudo or similar tools. -H, --require-handler Only match processes with a userspace signal handler present for the signal to be sent. --cgroup name,... Match on provided control group (cgroup) v2 name. 
See cgroups(8) --ns pid Match processes that belong to the same namespaces. Running as root is required to match processes from other users. See --nslist for how to limit which namespaces to match. --nslist name,... Match only the provided namespaces. Available namespaces: ipc, mnt, net, pid, user, uts. -q, --queue value Use sigqueue(3) rather than kill(2); the value argument specifies an integer to be sent with the signal. If the receiving process has installed a handler for this signal using the SA_SIGINFO flag to sigaction(2), then it can obtain this data via the si_value field of the siginfo_t structure. -V, --version Display version information and exit. -h, --help Display help and exit.
# pgrep

> Find or signal processes by name. More information: https://www.man7.org/linux/man-pages/man1/pkill.1.html.

* Return PIDs of any running processes with a matching command string:
`pgrep {{process_name}}`

* Search for processes including their command-line options:
`pgrep --full "{{process_name}} {{parameter}}"`

* Search for processes run by a specific user:
`pgrep --euid root {{process_name}}`
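The selection options above compose. As a minimal sketch, this session starts a disposable process and matches it by exact name (the `sleep 300` exists only so there is something to find):

```shell
sleep 300 &           # disposable process to match against
pid=$!

pgrep -x sleep        # -x: the name must match "sleep" exactly
pgrep -c -x sleep     # -c: print only the number of matches
pgrep -l -x sleep     # -l: print the name alongside each PID

kill "$pid"           # clean up the demo process
```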
bc
The bc utility shall implement an arbitrary precision calculator. It shall take input from any files given, then read from the standard input. If the standard input and standard output to bc are attached to a terminal, the invocation of bc shall be considered to be interactive, causing behavioral constraints described in the following sections. The bc utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines. The following option shall be supported: -l (The letter ell.) Define the math functions and initialize scale to 20, instead of the default zero; see the EXTENDED DESCRIPTION section.
# bc

> An arbitrary precision calculator language. See also: `dc`. More information: https://manned.org/man/freebsd-13.0/bc.1.

* Start an interactive session:
`bc`

* Start an interactive session with the standard math library enabled:
`bc --mathlib`

* Calculate an expression:
`bc --expression='{{5 / 3}}'`

* Execute a script:
`bc {{path/to/script.bc}}`

* Calculate an expression with the specified scale:
`bc --expression='scale = {{10}}; {{5 / 3}}'`

* Calculate a sine/cosine/arctangent/natural logarithm/exponential function using `mathlib`:
`bc --mathlib --expression='{{s|c|a|l|e}}({{1}})'`
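To illustrate the effect of scale (the number of digits kept after the decimal point), here is a short non-interactive session. Since `--mathlib`/`--expression` long options vary between bc implementations, this sketch pipes expressions on standard input instead:

```shell
printf '5 / 3\n' | bc                # default scale is 0: prints 1
printf 'scale = 4; 5 / 3\n' | bc     # prints 1.6666 (truncated, not rounded)
printf 's(1)\n' | bc -l              # -l: math library, scale 20; s() is sine
```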
git-credential-cache
This command caches credentials for use by future Git programs. The stored credentials are kept in memory of the cache-daemon process (instead of written to a file) and are forgotten after a configurable timeout. Credentials are forgotten sooner if the cache-daemon dies, for example if the system restarts. The cache is accessible over a Unix domain socket, restricted to the current user by filesystem permissions. You probably don’t want to invoke this command directly; it is meant to be used as a credential helper by other parts of Git. See gitcredentials(7) or EXAMPLES below. --timeout <seconds> Number of seconds to cache credentials (default: 900). --socket <path> Use <path> to contact a running cache daemon (or start a new cache daemon if one is not started). Defaults to $XDG_CACHE_HOME/git/credential/socket unless ~/.git-credential-cache/ exists in which case ~/.git-credential-cache/socket is used instead. If your home directory is on a network-mounted filesystem, you may need to change this to a local filesystem. You must specify an absolute path.
# git credential-cache

> Git helper to temporarily store passwords in memory. More information: https://git-scm.com/docs/git-credential-cache.

* Store Git credentials for a specific amount of time:
`git config credential.helper 'cache --timeout={{time_in_seconds}}'`
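A minimal sketch of enabling the helper in a throwaway repository (the one-hour timeout is an arbitrary choice for the example):

```shell
repo=$(mktemp -d)
cd "$repo" && git init -q

# Keep credentials in the cache daemon's memory for one hour
git config credential.helper 'cache --timeout=3600'

git config credential.helper   # prints: cache --timeout=3600
```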
git-log
Shows the commit logs. List commits that are reachable by following the parent links from the given commit(s), but exclude commits that are reachable from the one(s) given with a ^ in front of them. The output is given in reverse chronological order by default. You can think of this as a set operation. Commits reachable from any of the commits given on the command line form a set, and then commits reachable from any of the ones given with ^ in front are subtracted from that set. The remaining commits are what comes out in the command’s output. Various other options and paths parameters can be used to further limit the result. Thus, the following command: $ git log foo bar ^baz means "list all the commits which are reachable from foo or bar, but not from baz". A special notation "<commit1>..<commit2>" can be used as a short-hand for "^<commit1> <commit2>". For example, either of the following may be used interchangeably: $ git log origin..HEAD $ git log HEAD ^origin Another special notation is "<commit1>...<commit2>" which is useful for merges. The resulting set of commits is the symmetric difference between the two operands. The following two commands are equivalent: $ git log A B --not $(git merge-base --all A B) $ git log A...B The command takes options applicable to the git-rev-list(1) command to control what is shown and how, and options applicable to the git-diff(1) command to control how the changes each commit introduces are shown. --follow Continue listing the history of a file beyond renames (works only for a single file). --no-decorate, --decorate[=short|full|auto|no] Print out the ref names of any commits that are shown. If short is specified, the ref name prefixes refs/heads/, refs/tags/ and refs/remotes/ will not be printed. If full is specified, the full ref name (including prefix) will be printed. If auto is specified, then if the output is going to a terminal, the ref names are shown as if short were given, otherwise no ref names are shown. 
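The range equivalences above can be checked in a throwaway repository; the branch names and commit messages below are invented for the demo:

```shell
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email demo@example.com && git config user.name demo
git checkout -q -b main
git commit -q --allow-empty -m base        # shared by both branches
git branch topic
git commit -q --allow-empty -m main-only
git checkout -q topic
git commit -q --allow-empty -m topic-only

git log --oneline main..topic    # shows only: topic-only
git log --oneline topic ^main    # the equivalent long-hand spelling
git log --oneline main...topic   # symmetric difference: main-only, topic-only
```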
The option --decorate is short-hand for --decorate=short. Defaults to the configuration value of log.decorate if configured, otherwise auto. --decorate-refs=<pattern>, --decorate-refs-exclude=<pattern> For each candidate reference, do not use it for decoration if it matches any patterns given to --decorate-refs-exclude or if it doesn’t match any of the patterns given to --decorate-refs. The log.excludeDecoration config option allows excluding refs from the decorations, but an explicit --decorate-refs pattern will override a match in log.excludeDecoration. If none of these options or config settings are given, then references are used as decoration if they match HEAD, refs/heads/, refs/remotes/, refs/stash/, or refs/tags/. --clear-decorations When specified, this option clears all previous --decorate-refs or --decorate-refs-exclude options and relaxes the default decoration filter to include all references. This option is assumed if the config value log.initialDecorationSet is set to all. --source Print out the ref name given on the command line by which each commit was reached. --[no-]mailmap, --[no-]use-mailmap Use mailmap file to map author and committer names and email addresses to canonical real names and email addresses. See git-shortlog(1). --full-diff Without this flag, git log -p <path>... shows commits that touch the specified paths, and diffs about the same specified paths. With this, the full diff is shown for commits that touch the specified paths; this means that "<path>..." limits only commits, and doesn’t limit diff for those commits. Note that this affects all diff-based output types, e.g. those produced by --stat, etc. --log-size Include a line “log size <number>” in the output for each commit, where <number> is the length of that commit’s message in bytes. Intended to speed up tools that read log messages from git log output by allowing them to allocate space in advance. 
-L<start>,<end>:<file>, -L:<funcname>:<file> Trace the evolution of the line range given by <start>,<end>, or by the function name regex <funcname>, within the <file>. You may not give any pathspec limiters. This is currently limited to a walk starting from a single revision, i.e., you may only give zero or one positive revision arguments, and <start> and <end> (or <funcname>) must exist in the starting revision. You can specify this option more than once. Implies --patch. Patch output can be suppressed using --no-patch, but other diff formats (namely --raw, --numstat, --shortstat, --dirstat, --summary, --name-only, --name-status, --check) are not currently implemented. <start> and <end> can take one of these forms:

• number If <start> or <end> is a number, it specifies an absolute line number (lines count from 1).

• /regex/ This form will use the first line matching the given POSIX regex. If <start> is a regex, it will search from the end of the previous -L range, if any, otherwise from the start of file. If <start> is ^/regex/, it will search from the start of file. If <end> is a regex, it will search starting at the line given by <start>.

• +offset or -offset This is only valid for <end> and will specify a number of lines before or after the line given by <start>.

If :<funcname> is given in place of <start> and <end>, it is a regular expression that denotes the range from the first funcname line that matches <funcname>, up to the next funcname line. :<funcname> searches from the end of the previous -L range, if any, otherwise from the start of file. ^:<funcname> searches from the start of file. The function names are determined in the same way as git diff works out patch hunk headers (see Defining a custom hunk-header in gitattributes(5)). <revision-range> Show only commits in the specified revision range. When no <revision-range> is specified, it defaults to HEAD (i.e. the whole history leading to the current commit). 
origin..HEAD specifies all the commits reachable from the current commit (i.e. HEAD), but not from origin. For a complete list of ways to spell <revision-range>, see the Specifying Ranges section of gitrevisions(7). [--] <path>... Show only commits that are enough to explain how the files that match the specified paths came to be. See History Simplification below for details and other simplification modes. Paths may need to be prefixed with -- to separate them from options or the revision range, when confusion arises. Commit Limiting Besides specifying a range of commits that should be listed using the special notations explained in the description, additional commit limiting may be applied. Using more options generally further limits the output (e.g. --since=<date1> limits to commits newer than <date1>, and using it with --grep=<pattern> further limits to commits whose log message has a line that matches <pattern>), unless otherwise noted. Note that these are applied before commit ordering and formatting options, such as --reverse. -<number>, -n <number>, --max-count=<number> Limit the number of commits to output. --skip=<number> Skip <number> commits before starting to show the commit output. --since=<date>, --after=<date> Show commits more recent than a specific date. --since-as-filter=<date> Show all commits more recent than a specific date. This visits all commits in the range, rather than stopping at the first commit which is older than a specific date. --until=<date>, --before=<date> Show commits older than a specific date. --author=<pattern>, --committer=<pattern> Limit the commits output to ones with author/committer header lines that match the specified pattern (regular expression). With more than one --author=<pattern>, commits whose author matches any of the given patterns are chosen (similarly for multiple --committer=<pattern>). 
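The AND semantics of combining limiting options can be sketched in a throwaway repository (author names and messages are invented for the demo):

```shell
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git config user.email alice@example.com && git config user.name Alice
git commit -q --allow-empty -m 'fix: null check'
git config user.email bob@example.com && git config user.name Bob
git commit -q --allow-empty -m 'feat: new flag'

git log --oneline -n 1                      # at most one commit
git log --oneline --author=Alice            # only Alice's commit
git log --oneline --grep=fix                # match against the log message
git log --oneline --author=Bob --grep=fix   # AND semantics: nothing matches
```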
--grep-reflog=<pattern> Limit the commits output to ones with reflog entries that match the specified pattern (regular expression). With more than one --grep-reflog, commits whose reflog message matches any of the given patterns are chosen. It is an error to use this option unless --walk-reflogs is in use. --grep=<pattern> Limit the commits output to ones with log message that matches the specified pattern (regular expression). With more than one --grep=<pattern>, commits whose message matches any of the given patterns are chosen (but see --all-match). When --notes is in effect, the message from the notes is matched as if it were part of the log message. --all-match Limit the commits output to ones that match all given --grep, instead of ones that match at least one. --invert-grep Limit the commits output to ones with log message that do not match the pattern specified with --grep=<pattern>. -i, --regexp-ignore-case Match the regular expression limiting patterns without regard to letter case. --basic-regexp Consider the limiting patterns to be basic regular expressions; this is the default. -E, --extended-regexp Consider the limiting patterns to be extended regular expressions instead of the default basic regular expressions. -F, --fixed-strings Consider the limiting patterns to be fixed strings (don’t interpret pattern as a regular expression). -P, --perl-regexp Consider the limiting patterns to be Perl-compatible regular expressions. Support for these types of regular expressions is an optional compile-time dependency. If Git wasn’t compiled with support for them providing this option will cause it to die. --remove-empty Stop when a given path disappears from the tree. --merges Print only merge commits. This is exactly the same as --min-parents=2. --no-merges Do not print commits with more than one parent. This is exactly the same as --max-parents=1. 
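As a sketch of the merge-related filters, the following throwaway repository creates one merge commit and then views the history through --merges, --no-merges, and --first-parent; all names are invented:

```shell
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git config user.email demo@example.com && git config user.name demo
git checkout -q -b main
git commit -q --allow-empty -m base
git checkout -q -b feature
git commit -q --allow-empty -m feature-work
git checkout -q main
git commit -q --allow-empty -m main-work
git merge -q --no-edit feature       # histories diverged: a merge commit

git log --oneline --merges           # just the merge commit
git log --oneline --no-merges        # base, feature-work, main-work
git log --oneline --first-parent     # mainline only: base, main-work, merge
```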
--min-parents=<number>, --max-parents=<number>, --no-min-parents, --no-max-parents Show only commits which have at least (or at most) that many parent commits. In particular, --max-parents=1 is the same as --no-merges, --min-parents=2 is the same as --merges. --max-parents=0 gives all root commits and --min-parents=3 all octopus merges. --no-min-parents and --no-max-parents reset these limits (to no limit) again. Equivalent forms are --min-parents=0 (any commit has 0 or more parents) and --max-parents=-1 (negative numbers denote no upper limit). --first-parent When finding commits to include, follow only the first parent commit upon seeing a merge commit. This option can give a better overview when viewing the evolution of a particular topic branch, because merges into a topic branch tend to be only about adjusting to updated upstream from time to time, and this option allows you to ignore the individual commits brought in to your history by such a merge. This option also changes default diff format for merge commits to first-parent, see --diff-merges=first-parent for details. --exclude-first-parent-only When finding commits to exclude (with a ^), follow only the first parent commit upon seeing a merge commit. This can be used to find the set of changes in a topic branch from the point where it diverged from the remote branch, given that arbitrary merges can be valid topic branch changes. --not Reverses the meaning of the ^ prefix (or lack thereof) for all following revision specifiers, up to the next --not. --all Pretend as if all the refs in refs/, along with HEAD, are listed on the command line as <commit>. --branches[=<pattern>] Pretend as if all the refs in refs/heads are listed on the command line as <commit>. If <pattern> is given, limit branches to ones matching given shell glob. If pattern lacks ?, *, or [, /* at the end is implied. --tags[=<pattern>] Pretend as if all the refs in refs/tags are listed on the command line as <commit>. 
If <pattern> is given, limit tags to ones matching given shell glob. If pattern lacks ?, *, or [, /* at the end is implied. --remotes[=<pattern>] Pretend as if all the refs in refs/remotes are listed on the command line as <commit>. If <pattern> is given, limit remote-tracking branches to ones matching given shell glob. If pattern lacks ?, *, or [, /* at the end is implied. --glob=<glob-pattern> Pretend as if all the refs matching shell glob <glob-pattern> are listed on the command line as <commit>. Leading refs/ is automatically prepended if missing. If pattern lacks ?, *, or [, /* at the end is implied. --exclude=<glob-pattern> Do not include refs matching <glob-pattern> that the next --all, --branches, --tags, --remotes, or --glob would otherwise consider. Repetitions of this option accumulate exclusion patterns up to the next --all, --branches, --tags, --remotes, or --glob option (other options or arguments do not clear accumulated patterns). The patterns given should not begin with refs/heads, refs/tags, or refs/remotes when applied to --branches, --tags, or --remotes, respectively, and they must begin with refs/ when applied to --glob or --all. If a trailing /* is intended, it must be given explicitly. --exclude-hidden=[fetch|receive|uploadpack] Do not include refs that would be hidden by git-fetch, git-receive-pack or git-upload-pack by consulting the appropriate fetch.hideRefs, receive.hideRefs or uploadpack.hideRefs configuration along with transfer.hideRefs (see git-config(1)). This option affects the next pseudo-ref option --all or --glob and is cleared after processing them. --reflog Pretend as if all objects mentioned by reflogs are listed on the command line as <commit>. --alternate-refs Pretend as if all objects mentioned as ref tips of alternate repositories were listed on the command line. An alternate repository is any repository whose object directory is specified in objects/info/alternates. 
The set of included objects may be modified by core.alternateRefsCommand, etc. See git-config(1). --single-worktree By default, all working trees will be examined by the following options when there are more than one (see git-worktree(1)): --all, --reflog and --indexed-objects. This option forces them to examine the current working tree only. --ignore-missing Upon seeing an invalid object name in the input, pretend as if the bad input was not given. --bisect Pretend as if the bad bisection ref refs/bisect/bad was listed and as if it was followed by --not and the good bisection refs refs/bisect/good-* on the command line. --stdin In addition to the <commit> listed on the command line, read them from the standard input. If a -- separator is seen, stop reading commits and start reading paths to limit the result. --cherry-mark Like --cherry-pick (see below) but mark equivalent commits with = rather than omitting them, and inequivalent ones with +. --cherry-pick Omit any commit that introduces the same change as another commit on the “other side” when the set of commits are limited with symmetric difference. For example, if you have two branches, A and B, a usual way to list all commits on only one side of them is with --left-right (see the example below in the description of the --left-right option). However, it shows the commits that were cherry-picked from the other branch (for example, “3rd on b” may be cherry-picked from branch A). With this option, such pairs of commits are excluded from the output. --left-only, --right-only List only commits on the respective side of a symmetric difference, i.e. only those which would be marked < resp. > by --left-right. For example, --cherry-pick --right-only A...B omits those commits from B which are in A or are patch-equivalent to a commit in A. In other words, this lists the + commits from git cherry A B. More precisely, --cherry-pick --right-only --no-merges gives the exact list. 
--cherry A synonym for --right-only --cherry-mark --no-merges; useful to limit the output to the commits on our side and mark those that have been applied to the other side of a forked history with git log --cherry upstream...mybranch, similar to git cherry upstream mybranch. -g, --walk-reflogs Instead of walking the commit ancestry chain, walk reflog entries from the most recent one to older ones. When this option is used you cannot specify commits to exclude (that is, ^commit, commit1..commit2, and commit1...commit2 notations cannot be used). With --pretty format other than oneline and reference (for obvious reasons), this causes the output to have two extra lines of information taken from the reflog. The reflog designator in the output may be shown as ref@{Nth} (where Nth is the reverse-chronological index in the reflog) or as ref@{timestamp} (with the timestamp for that entry), depending on a few rules:

1. If the starting point is specified as ref@{Nth}, show the index format.

2. If the starting point was specified as ref@{now}, show the timestamp format.

3. If neither was used, but --date was given on the command line, show the timestamp in the format requested by --date.

4. Otherwise, show the index format.

Under --pretty=oneline, the commit message is prefixed with this information on the same line. This option cannot be combined with --reverse. See also git-reflog(1). Under --pretty=reference, this information will not be shown at all. --merge After a failed merge, show refs that touch files having a conflict and don’t exist on all heads to merge. --boundary Output excluded boundary commits. Boundary commits are prefixed with -. History Simplification Sometimes you are only interested in parts of the history, for example the commits modifying a particular <path>. History simplification has two parts: one is selecting the commits to show, and the other is how the simplification is performed, as there are various strategies to simplify the history. 
The following options select the commits to be shown: <paths> Commits modifying the given <paths> are selected. --simplify-by-decoration Commits that are referred to by some branch or tag are selected. Note that extra commits can be shown to give a meaningful history. The following options affect the way the simplification is performed: Default mode Simplifies the history to the simplest history explaining the final state of the tree. Simplest because it prunes some side branches if the end result is the same (i.e. merging branches with the same content). --show-pulls Include all commits from the default mode, but also any merge commits that are not TREESAME to the first parent but are TREESAME to a later parent. This mode is helpful for showing the merge commits that "first introduced" a change to a branch. --full-history Same as the default mode, but does not prune some history. --dense Only the selected commits are shown, plus some to have a meaningful history. --sparse All commits in the simplified history are shown. --simplify-merges Additional option to --full-history to remove some needless merges from the resulting history, as there are no selected commits contributing to this merge. --ancestry-path[=<commit>] When given a range of commits to display (e.g. commit1..commit2 or commit2 ^commit1), only display commits in that range that are ancestors of <commit>, descendants of <commit>, or <commit> itself. If no commit is specified, use commit1 (the excluded part of the range) as <commit>. Can be passed multiple times; if so, a commit is included if it is any of the commits given or if it is an ancestor or descendant of one of them. A more detailed explanation follows. Suppose you specified foo as the <paths>. We shall call commits that modify foo !TREESAME, and the rest TREESAME. (In a diff filtered for foo, they look different and equal, respectively.) 
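A small repository makes the difference between the default mode and --full-history tangible. In this sketch (invented file and branch names), a merge carries side's change to foo into main; the merge is TREESAME to its side parent, so the default mode simplifies it away, while --full-history keeps it:

```shell
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git config user.email demo@example.com && git config user.name demo
git checkout -q -b main
echo one > foo && git add foo && git commit -q -m 'add foo'
git checkout -q -b side
echo two > foo && git commit -qam 'edit foo'
git checkout -q main
echo bar > bar && git add bar && git commit -q -m 'add bar'   # untouched foo
git merge -q --no-edit side        # merge is TREESAME to side w.r.t. foo

git log --oneline -- foo                  # default: add foo, edit foo
git log --oneline --full-history -- foo   # additionally shows the merge
```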
In the following, we will always refer to the same example history to illustrate the differences between simplification settings. We assume that you are filtering for a file foo in this commit graph:

           .-A---M---N---O---P---Q
          /     /   /   /   /   /
         I     B   C   D   E   Y
          \   /   /   /   /   /
           `-------------'   X

The horizontal line of history A---Q is taken to be the first parent of each merge. The commits are:

• I is the initial commit, in which foo exists with contents “asdf”, and a file quux exists with contents “quux”. Initial commits are compared to an empty tree, so I is !TREESAME.

• In A, foo contains just “foo”.

• B contains the same change as A. Its merge M is trivial and hence TREESAME to all parents.

• C does not change foo, but its merge N changes it to “foobar”, so it is not TREESAME to any parent.

• D sets foo to “baz”. Its merge O combines the strings from N and D to “foobarbaz”; i.e., it is not TREESAME to any parent.

• E changes quux to “xyzzy”, and its merge P combines the strings to “quux xyzzy”. P is TREESAME to O, but not to E.

• X is an independent root commit that added a new file side, and Y modified it. Y is TREESAME to X. Its merge Q added side to P, and Q is TREESAME to P, but not to Y.

rev-list walks backwards through history, including or excluding commits based on whether --full-history and/or parent rewriting (via --parents or --children) are used. The following settings are available. Default mode Commits are included if they are not TREESAME to any parent (though this can be changed, see --sparse below). If the commit was a merge, and it was TREESAME to one parent, follow only that parent. (Even if there are several TREESAME parents, follow only one of them.) Otherwise, follow all parents. This results in:

           .-A---N---O
          /     /   /
         I---------D

Note how the rule to only follow the TREESAME parent, if one is available, removed B from consideration entirely. C was considered via N, but is TREESAME. Root commits are compared to an empty tree, so I is !TREESAME. 
Parent/child relations are only visible with --parents, but that does not affect the commits selected in default mode, so we have shown the parent lines. --full-history without parent rewriting This mode differs from the default in one point: always follow all parents of a merge, even if it is TREESAME to one of them. Even if more than one side of the merge has commits that are included, this does not imply that the merge itself is! In the example, we get

        I A B N D O P Q

M was excluded because it is TREESAME to both parents. E, C and B were all walked, but only B was !TREESAME, so the others do not appear. Note that without parent rewriting, it is not really possible to talk about the parent/child relationships between the commits, so we show them disconnected. --full-history with parent rewriting Ordinary commits are only included if they are !TREESAME (though this can be changed, see --sparse below). Merges are always included. However, their parent list is rewritten: Along each parent, prune away commits that are not included themselves. This results in

           .-A---M---N---O---P---Q
          /     /   /   /   /   /
         I     B   /   D   /
          \   /   /   /   /
           `-------------'

Compare to --full-history without rewriting above. Note that E was pruned away because it is TREESAME, but the parent list of P was rewritten to contain E's parent I. The same happened for C and N, and X, Y and Q. In addition to the above settings, you can change whether TREESAME affects inclusion: --dense Commits that are walked are included if they are not TREESAME to any parent. --sparse All commits that are walked are included. Note that without --full-history, this still simplifies merges: if one of the parents is TREESAME, we follow only that one, so the other sides of the merge are never walked. --simplify-merges First, build a history graph in the same way that --full-history with parent rewriting does (see above). Then simplify each commit C to its replacement C' in the final history according to the following rules:

• Set C' to C. 
• Replace each parent P of C' with its simplification P'. In the process, drop parents that are ancestors of other parents or that are root commits TREESAME to an empty tree, and remove duplicates, but take care to never drop all parents that we are TREESAME to.

• If after this parent rewriting, C' is a root or merge commit (has zero or >1 parents), a boundary commit, or !TREESAME, it remains. Otherwise, it is replaced with its only parent.

The effect of this is best shown by way of comparing to --full-history with parent rewriting. The example turns into:

           .-A---M---N---O
          /     /       /
         I     B       D
          \   /       /
           `---------'

Note the major differences in N, P, and Q over --full-history:

• N's parent list had I removed, because it is an ancestor of the other parent M. Still, N remained because it is !TREESAME.

• P's parent list similarly had I removed. P was then removed completely, because it had one parent and is TREESAME.

• Q's parent list had Y simplified to X. X was then removed, because it was a TREESAME root. Q was then removed completely, because it had one parent and is TREESAME.

There is another simplification mode available: --ancestry-path[=<commit>] Limit the displayed commits to those which are an ancestor of <commit>, or which are a descendant of <commit>, or are <commit> itself. As an example use case, consider the following commit history:

                 D---E-------F
                /     \       \
               B---C---G---H---I---J
              /                     \
             A-------K---------------L--M

A regular D..M computes the set of commits that are ancestors of M, but excludes the ones that are ancestors of D. This is useful to see what happened to the history leading to M since D, in the sense that “what does M have that did not exist in D”. The result in this example would be all the commits, except A and B (and D itself, of course). When we want to find out what commits in M are contaminated with the bug introduced by D and need fixing, however, we might want to view only the subset of D..M that are actually descendants of D, i.e. excluding C and K. 
    This is exactly what the --ancestry-path option does. Applied to the D..M range, it results in:

            E-------F
             \       \
              G---H---I---J
                           \
                            L--M

    We can also use --ancestry-path=D instead of --ancestry-path which means the same thing when applied to the D..M range but is just more explicit.

    If we instead are interested in a given topic within this range, and all commits affected by that topic, we may only want to view the subset of D..M which contain that topic in their ancestry path. So, using --ancestry-path=H D..M for example would result in:

            E
             \
              G---H---I---J
                           \
                            L--M

    Whereas --ancestry-path=K D..M would result in

            K---------------L--M

Before discussing another option, --show-pulls, we need to create a new example history.

A common problem users face when looking at simplified history is that a commit they know changed a file somehow does not appear in the file’s simplified history. Let’s demonstrate a new example and show how options such as --full-history and --simplify-merges work in that case:

      .-A---M-----C--N---O---P
     /     / \  \  \/   /   /
    I     B   \  R-'`-Z'   /
     \   /     \/         /
      \ /      /\        /
       `---X--'  `---Y--'

For this example, suppose I created file.txt which was modified by A, B, and X in different ways. The single-parent commits C, Z, and Y do not change file.txt. The merge commit M was created by resolving the merge conflict to include both changes from A and B and hence is not TREESAME to either. The merge commit R, however, was created by ignoring the contents of file.txt at M and taking only the contents of file.txt at X. Hence, R is TREESAME to X but not M. Finally, the natural merge resolution to create N is to take the contents of file.txt at R, so N is TREESAME to R but not C. The merge commits O and P are TREESAME to their first parents, but not to their second parents, Z and Y respectively.

When using the default mode, N and R both have a TREESAME parent, so those edges are walked and the others are ignored.
The resulting history graph is:

    I---X

When using --full-history, Git walks every edge. This will discover the commits A and B and the merge M, but also will reveal the merge commits O and P. With parent rewriting, the resulting graph is:

      .-A---M--------N---O---P
     /     / \  \  \/   /   /
    I     B   \  R-'`--'   /
     \   /     \/         /
      \ /      /\        /
       `---X--'  `------'

Here, the merge commits O and P contribute extra noise, as they did not actually contribute a change to file.txt. They only merged a topic that was based on an older version of file.txt. This is a common issue in repositories using a workflow where many contributors work in parallel and merge their topic branches along a single trunk: many unrelated merges appear in the --full-history results.

When using the --simplify-merges option, the commits O and P disappear from the results. This is because the rewritten second parents of O and P are reachable from their first parents. Those edges are removed and then the commits look like single-parent commits that are TREESAME to their parent. This also happens to the commit N, resulting in a history view as follows:

      .-A---M--.
     /     /    \
    I     B      R
     \   /      /
      \ /      /
       `---X--'

In this view, we see all of the important single-parent changes from A, B, and X. We also see the carefully-resolved merge M and the not-so-carefully-resolved merge R. This is usually enough information to determine why the commits A and B "disappeared" from history in the default view. However, there are a few issues with this approach.

The first issue is performance. Unlike any previous option, the --simplify-merges option requires walking the entire commit history before returning a single result. This can make the option difficult to use for very large repositories.

The second issue is one of auditing. When many contributors are working on the same repository, it is important which merge commits introduced a change into an important branch.
The problematic merge R above is not likely to be the merge commit that was used to merge into an important branch. Instead, the merge N was used to merge R and X into the important branch. This commit may have information about why the change X came to override the changes from A and B in its commit message.

--show-pulls
    In addition to the commits shown in the default history, show each merge commit that is not TREESAME to its first parent but is TREESAME to a later parent.

    When a merge commit is included by --show-pulls, the merge is treated as if it "pulled" the change from another branch. When using --show-pulls on this example (and no other options) the resulting graph is:

        I---X---R---N

    Here, the merge commits R and N are included because they pulled the commits X and R into the base branch, respectively. These merges are the reason the commits A and B do not appear in the default history.

    When --show-pulls is paired with --simplify-merges, the graph includes all of the necessary information:

          .-A---M--.   N
         /     /    \ /
        I     B      R
         \   /      /
          \ /      /
           `---X--'

    Notice that since M is reachable from R, the edge from N to M was simplified away. However, N still appears in the history as an important commit because it "pulled" the change R into the main branch.

The --simplify-by-decoration option allows you to view only the big picture of the topology of the history, by omitting commits that are not referenced by tags. Commits are marked as !TREESAME (in other words, kept after history simplification rules described above) if (1) they are referenced by tags, or (2) they change the contents of the paths given on the command line. All other commits are marked as TREESAME (subject to be simplified away).

Commit Ordering
    By default, the commits are shown in reverse chronological order.

--date-order
    Show no parents before all of its children are shown, but otherwise show commits in the commit timestamp order.
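The effect of --show-pulls can be reproduced in a throwaway repository. The sketch below (file names, branch names and commit messages are invented for the demo) builds a history where a side branch's change to file.txt lands via a clean merge, then compares the default view with --show-pulls:

```shell
#!/bin/sh
# Demo sketch: --show-pulls on a single file's history (invented example).
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.name Demo && git config user.email demo@example.com

echo v1 > file.txt
git add file.txt && git commit -qm "I: add file.txt"
git checkout -qb side
echo v2 > file.txt
git commit -qam "X: change file.txt on side"
git checkout -q -                       # back to the initial branch
echo hi > other.txt
git add other.txt && git commit -qm "C: unrelated change"
# Only "side" touched file.txt, so the merge result is TREESAME to its
# second parent (X) but not to its first parent (C).
git merge -q --no-ff side -m "N: merge side"

default_view=$(git log --oneline -- file.txt | wc -l)
pulls_view=$(git log --show-pulls --oneline -- file.txt | wc -l)
echo "default: $default_view commits, --show-pulls: $pulls_view commits"
```

In the default view only X and I appear (the merge N is TREESAME to a parent, so it is walked silently); --show-pulls additionally lists N, the merge that "pulled" X into the base branch.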
--author-date-order
    Show no parents before all of its children are shown, but otherwise show commits in the author timestamp order.

--topo-order
    Show no parents before all of its children are shown, and avoid showing commits on multiple lines of history intermixed.

    For example, in a commit history like this:

        ---1----2----4----7
            \              \
             3----5----6----8---

    where the numbers denote the order of commit timestamps, git rev-list and friends with --date-order show the commits in the timestamp order: 8 7 6 5 4 3 2 1.

    With --topo-order, they would show 8 6 5 3 7 4 2 1 (or 8 7 4 2 6 5 3 1); some older commits are shown before newer ones in order to avoid showing the commits from two parallel development tracks mixed together.

--reverse
    Output the commits chosen to be shown (see Commit Limiting section above) in reverse order. Cannot be combined with --walk-reflogs.

Object Traversal
    These options are mostly targeted for packing of Git repositories.

--no-walk[=(sorted|unsorted)]
    Only show the given commits, but do not traverse their ancestors. This has no effect if a range is specified. If the argument unsorted is given, the commits are shown in the order they were given on the command line. Otherwise (if sorted or no argument was given), the commits are shown in reverse chronological order by commit time. Cannot be combined with --graph.

--do-walk
    Overrides a previous --no-walk.

Commit Formatting

--pretty[=<format>], --format=<format>
    Pretty-print the contents of the commit logs in a given format, where <format> can be one of oneline, short, medium, full, fuller, reference, email, raw, format:<string> and tformat:<string>. When <format> is none of the above, and has %placeholder in it, it acts as if --pretty=tformat:<format> were given.

    See the "PRETTY FORMATS" section for some additional details for each format. When =<format> part is omitted, it defaults to medium.

    Note: you can specify the default pretty format in the repository configuration (see git-config(1)).
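A quick way to see --reverse, --no-walk, and a %placeholder format in action, using a hypothetical three-commit repository:

```shell
#!/bin/sh
# Demo sketch: ordering, traversal and pretty formats (invented repo).
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.name Demo && git config user.email demo@example.com

for n in 1 2 3; do
    git commit -q --allow-empty -m "commit $n"
done

oldest=$(git rev-list --max-parents=0 HEAD)           # the root commit
first_reversed=$(git rev-list --reverse HEAD | head -n 1)
no_walk_count=$(git rev-list --no-walk HEAD | wc -l)  # HEAD only, no ancestors
subject=$(git log -1 --pretty=format:%s)              # subject of HEAD
echo "root=$oldest no_walk=$no_walk_count subject=$subject"
```

With --reverse the root commit is printed first; --no-walk emits exactly the commit given; and a format string containing %s behaves like --pretty=tformat:%s.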
--abbrev-commit
    Instead of showing the full 40-byte hexadecimal commit object name, show a prefix that names the object uniquely. The "--abbrev=<n>" option (which also modifies diff output, if it is displayed) can be used to specify the minimum length of the prefix.

    This should make "--pretty=oneline" a whole lot more readable for people using 80-column terminals.

--no-abbrev-commit
    Show the full 40-byte hexadecimal commit object name. This negates --abbrev-commit, either explicit or implied by other options such as "--oneline". It also overrides the log.abbrevCommit variable.

--oneline
    This is a shorthand for "--pretty=oneline --abbrev-commit" used together.

--encoding=<encoding>
    Commit objects record the character encoding used for the log message in their encoding header; this option can be used to tell the command to re-code the commit log message in the encoding preferred by the user. For non-plumbing commands this defaults to UTF-8. Note that if an object claims to be encoded in X and we are outputting in X, we will output the object verbatim; this means that invalid sequences in the original commit may be copied to the output. Likewise, if iconv(3) fails to convert the commit, we will quietly output the original object verbatim.

--expand-tabs=<n>, --expand-tabs, --no-expand-tabs
    Perform a tab expansion (replace each tab with enough spaces to fill to the next display column that is a multiple of <n>) in the log message before showing it in the output. --expand-tabs is a short-hand for --expand-tabs=8, and --no-expand-tabs is a short-hand for --expand-tabs=0, which disables tab expansion.

    By default, tabs are expanded in pretty formats that indent the log message by 4 spaces (i.e. medium, which is the default, full, and fuller).

--notes[=<ref>]
    Show the notes (see git-notes(1)) that annotate the commit, when showing the commit log message.
    This is the default for git log, git show and git whatchanged commands when there is no --pretty, --format, or --oneline option given on the command line.

    By default, the notes shown are from the notes refs listed in the core.notesRef and notes.displayRef variables (or corresponding environment overrides). See git-config(1) for more details.

    With an optional <ref> argument, use the ref to find the notes to display. The ref can specify the full refname when it begins with refs/notes/; when it begins with notes/, refs/ and otherwise refs/notes/ is prefixed to form a full name of the ref.

    Multiple --notes options can be combined to control which notes are being displayed. Examples: "--notes=foo" will show only notes from "refs/notes/foo"; "--notes=foo --notes" will show both notes from "refs/notes/foo" and from the default notes ref(s).

--no-notes
    Do not show notes. This negates the above --notes option, by resetting the list of notes refs from which notes are shown. Options are parsed in the order given on the command line, so e.g. "--notes --notes=foo --no-notes --notes=bar" will only show notes from "refs/notes/bar".

--show-notes[=<ref>], --[no-]standard-notes
    These options are deprecated. Use the above --notes/--no-notes options instead.

--show-signature
    Check the validity of a signed commit object by passing the signature to gpg --verify and show the output.

--relative-date
    Synonym for --date=relative.

--date=<format>
    Only takes effect for dates shown in human-readable format, such as when using --pretty. The log.date config variable sets a default value for the log command’s --date option. By default, dates are shown in the original time zone (either committer’s or author’s). If -local is appended to the format (e.g., iso-local), the user’s local time zone is used instead.

    --date=relative shows dates relative to the current time, e.g. “2 hours ago”. The -local option has no effect for --date=relative.

    --date=local is an alias for --date=default-local.
    --date=iso (or --date=iso8601) shows timestamps in an ISO 8601-like format. The differences to the strict ISO 8601 format are:

    •   a space instead of the T date/time delimiter

    •   a space between time and time zone

    •   no colon between hours and minutes of the time zone

    --date=iso-strict (or --date=iso8601-strict) shows timestamps in strict ISO 8601 format.

    --date=rfc (or --date=rfc2822) shows timestamps in RFC 2822 format, often found in email messages.

    --date=short shows only the date, but not the time, in YYYY-MM-DD format.

    --date=raw shows the date as seconds since the epoch (1970-01-01 00:00:00 UTC), followed by a space, and then the timezone as an offset from UTC (a + or - with four digits; the first two are hours, and the second two are minutes). I.e., as if the timestamp were formatted with strftime("%s %z"). Note that the -local option does not affect the seconds-since-epoch value (which is always measured in UTC), but does switch the accompanying timezone value.

    --date=human shows the timezone if the timezone does not match the current time zone, and doesn’t print the whole date if that matches (i.e. it skips printing the year for dates that are "this year", but also skips the whole date itself if it’s in the last few days and we can just say what weekday it was). For older dates the hour and minute are also omitted.

    --date=unix shows the date as a Unix epoch timestamp (seconds since 1970). As with --raw, this is always in UTC and therefore -local has no effect.

    --date=format:... feeds the format ... to your system strftime, except for %s, %z, and %Z, which are handled internally. Use --date=format:%c to show the date in your system locale’s preferred format. See the strftime manual for a complete list of format placeholders. When using -local, the correct syntax is --date=format-local:....

    --date=default is the default format, and is based on ctime(3) output.
    It shows a single line with three-letter day of the week, three-letter month, day-of-month, hour-minute-seconds in "HH:MM:SS" format, followed by 4-digit year, plus timezone information, unless the local time zone is used, e.g. Thu Jan 1 00:00:00 1970 +0000.

--parents
    Print also the parents of the commit (in the form "commit parent..."). Also enables parent rewriting, see History Simplification above.

--children
    Print also the children of the commit (in the form "commit child..."). Also enables parent rewriting, see History Simplification above.

--left-right
    Mark which side of a symmetric difference a commit is reachable from. Commits from the left side are prefixed with < and those from the right with >. If combined with --boundary, those commits are prefixed with -.

    For example, if you have this topology:

             y---b---b  branch B
            / \ /
           /   .
          /   / \
         o---x---a---a  branch A

    you would get an output like this:

        $ git rev-list --left-right --boundary --pretty=oneline A...B

        >bbbbbbb... 3rd on b
        >bbbbbbb... 2nd on b
        <aaaaaaa... 3rd on a
        <aaaaaaa... 2nd on a
        -yyyyyyy... 1st on b
        -xxxxxxx... 1st on a

--graph
    Draw a text-based graphical representation of the commit history on the left hand side of the output. This may cause extra lines to be printed in between commits, in order for the graph history to be drawn properly. Cannot be combined with --no-walk.

    This enables parent rewriting, see History Simplification above.

    This implies the --topo-order option by default, but the --date-order option may also be specified.

--show-linear-break[=<barrier>]
    When --graph is not used, all history branches are flattened which can make it hard to see that the two consecutive commits do not belong to a linear branch. This option puts a barrier in between them in that case. If <barrier> is specified, it is the string that will be shown instead of the default one.
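The symmetric-difference marking is easy to check on a toy history. This sketch (branch names invented) counts the commits unique to each side of a three-dot range:

```shell
#!/bin/sh
# Demo sketch: --left-right on a symmetric difference (invented branches).
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.name Demo && git config user.email demo@example.com

git commit -q --allow-empty -m base
git branch feature                      # feature forks at "base"
git commit -q --allow-empty -m "main 1"
git commit -q --allow-empty -m "main 2"
git checkout -q feature
git commit -q --allow-empty -m "feature 1"
git checkout -q -

# < marks commits only reachable from the left side, > from the right.
left=$(git rev-list --left-right "HEAD...feature" | grep -c '^<')
right=$(git rev-list --left-right "HEAD...feature" | grep -c '^>')
echo "only on HEAD side: $left, only on feature side: $right"
```

Two commits exist only on the original branch and one only on feature, so the prefixes split the range accordingly.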
# git log

> Show a history of commits. More information: https://git-scm.com/docs/git-log.

* Show the sequence of commits starting from the current one, in reverse chronological order, for the Git repository in the current working directory:

`git log`

* Show the history of a particular file or directory, including differences:

`git log -p {{path/to/file_or_directory}}`

* Show an overview of which file(s) changed in each commit:

`git log --stat`

* Show a graph of commits in the current branch using only the first line of each commit message:

`git log --oneline --graph`

* Show a graph of all commits, tags and branches in the entire repo:

`git log --oneline --decorate --all --graph`

* Show only commits whose messages include a given string (case-insensitively):

`git log -i --grep {{search_string}}`

* Show the last N commits from a certain author:

`git log -n {{number}} --author={{author}}`

* Show commits between two dates (yyyy-mm-dd):

`git log --before="{{2017-01-29}}" --after="{{2017-01-17}}"`
quota
quota displays users' disk usage and limits. By default only the user quotas are printed. By default space usage and limits are shown in kbytes (and are named blocks for historical reasons).

quota reports the quotas of all the filesystems listed in /etc/mtab. For filesystems that are NFS-mounted a call to the rpc.rquotad on the server machine is performed to get the information.

-F, --format=format-name
    Show quota for specified format (i.e. don't perform format autodetection). Possible format names are:

    vfsold  Original quota format with 16-bit UIDs / GIDs,

    vfsv0   Quota format with 32-bit UIDs / GIDs, 64-bit space usage, 32-bit inode usage and limits,

    vfsv1   Quota format with 64-bit quota limits and usage,

    rpc     (quota over NFS),

    xfs     (quota on XFS filesystem)

-g, --group
    Print group quotas for the group of which the user is a member. The optional group argument(s) restricts the display to the specified group(s).

-u, --user
    This flag is equivalent to the default.

-P, --project
    Print project quotas for the specified project.

-v, --verbose
    Display quotas on filesystems where no storage is allocated.

-s, --human-readable[=units]
    This option will make quota(1) try to choose units for showing limits, used space and used inodes. Units can also be specified explicitly by an optional argument in format [ kgt ],[ kgt ] where the first character specifies space units and the second character specifies inode units.

--always-resolve
    Always try to translate user / group name to uid / gid even if the name is composed of digits only.

-p, --raw-grace
    When user is in grace period, report time in seconds since epoch when his grace time runs out (or has run out). Field is '0' when no grace time is in effect. This is especially useful when parsing output by a script.

-i, --no-autofs
    Ignore mountpoints mounted by automounter.

-l, --local-only
    Report quotas only on local filesystems (i.e. ignore NFS-mounted filesystems).
-A, --all-nfs
    Report quotas for all NFS filesystems even if they report to be on the same device.

-f, --filesystem-list
    Report quotas only for filesystems specified on the command line.

--filesystem=path
    Report quotas only for filesystem path. This option can be specified multiple times and quota will be reported for each specified filesystem. Unlike the command line option -f, remaining command line arguments are still treated as user / group / project names to report.

-m, --no-mixed-pathnames
    Currently, pathnames of NFSv4 mountpoints are sent without a leading slash in the path. rpc.rquotad uses this to recognize NFSv4 mounts and properly prepend the pseudoroot of the NFS filesystem to the path. If you specify this option, quota will always send paths with a leading slash. This can be useful for legacy reasons but be aware that quota over RPC will stop working if you are using a new rpc.rquotad.

-q, --quiet
    Print a more terse message, containing only information on filesystems where usage is over quota.

-Q, --quiet-refuse
    Do not print an error message if the connection to rpc.rquotad is refused (usually this happens when rpc.rquotad is not running on the server).

-w, --no-wrap
    Do not wrap the line if the device name is too long. This can be useful when parsing the output of quota(1) by a script.

--show-mntpoint
    Show also the mount point as a filesystem identification.

--hide-device
    Do not show the device name in a filesystem identification.

Specifying both -g and -u displays both the user quotas and the group quotas (for the user). Only the super-user may use the -u flag and the optional user argument to view the limits of other users. Also, viewing of project quota usage and limits is limited to the super-user only. Non-super-users can use the -g flag and optional group argument to view only the limits of groups of which they are members. The -q flag takes precedence over the -v flag.
# quota

> Display users' disk space usage and allocated limits. More information: https://manned.org/quota.

* Show disk quotas in human-readable units for the current user:

`quota -s`

* Verbose output (also display quotas on filesystems where no storage is allocated):

`quota -v`

* Quiet output (only display quotas on filesystems where usage is over quota):

`quota -q`

* Print quotas for the groups of which the current user is a member:

`quota -g`

* Show disk quotas for another user:

`sudo quota -u {{username}}`
git-format-patch
Prepare each non-merge commit with its "patch" in one "message" per commit, formatted to resemble a UNIX mailbox. The output of this command is convenient for e-mail submission or for use with git am.

A "message" generated by the command consists of three parts:

•   A brief metadata header that begins with From <commit> with a fixed Mon Sep 17 00:00:00 2001 datestamp to help programs like "file(1)" to recognize that the file is an output from this command, fields that record the author identity, the author date, and the title of the change (taken from the first paragraph of the commit log message).

•   The second and subsequent paragraphs of the commit log message.

•   The "patch", which is the "diff -p --stat" output (see git-diff(1)) between the commit and its parent.

The log message and the patch are separated by a line with a three-dash line.

There are two ways to specify which commits to operate on.

 1. A single commit, <since>, specifies that the commits leading to the tip of the current branch that are not in the history that leads to the <since> to be output.

 2. Generic <revision range> expression (see "SPECIFYING REVISIONS" section in gitrevisions(7)) means the commits in the specified range.

The first rule takes precedence in the case of a single <commit>. To apply the second rule, i.e., format everything since the beginning of history up until <commit>, use the --root option: git format-patch --root <commit>. If you want to format only <commit> itself, you can do this with git format-patch -1 <commit>.

By default, each output file is numbered sequentially from 1, and uses the first line of the commit message (massaged for pathname safety) as the filename. With the --numbered-files option, the output file names will only be numbers, without the first line of the commit appended. The names of the output files are printed to standard output, unless the --stdout option is specified.

If -o is specified, output files are created in <dir>.
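A minimal sketch of the two selection rules and the mailbox-style output, in a throwaway repository (file name and commit subjects are invented):

```shell
#!/bin/sh
# Demo sketch: git format-patch selection and output naming (invented repo).
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.name Demo && git config user.email demo@example.com

echo one > file.txt
git add file.txt && git commit -qm "Add file"
echo two >> file.txt
git commit -qam "Extend file"

latest=$(git format-patch -1 HEAD)                  # only HEAD itself
all=$(git format-patch --root -o out HEAD | wc -l)  # everything since the root
first_line=$(head -n 1 "$latest")
echo "latest patch: $latest ($all patches in out/)"
echo "$first_line"
```

The single-commit file is named from the subject line (0001-Extend-file.patch here), the --root run writes one file per commit into out/, and each file opens with the fixed Mon Sep 17 00:00:00 2001 mailbox header.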
Otherwise they are created in the current working directory. The default path can be set with the format.outputDirectory configuration option. The -o option takes precedence over format.outputDirectory. To store patches in the current working directory even when format.outputDirectory points elsewhere, use -o .. All directory components will be created.

By default, the subject of a single patch is "[PATCH] " followed by the concatenation of lines from the commit message up to the first blank line (see the DISCUSSION section of git-commit(1)).

When multiple patches are output, the subject prefix will instead be "[PATCH n/m] ". To force 1/1 to be added for a single patch, use -n. To omit patch numbers from the subject, use -N.

If given --thread, git-format-patch will generate In-Reply-To and References headers to make the second and subsequent patch mails appear as replies to the first mail; this also generates a Message-ID header to reference.

-p, --no-stat
    Generate plain patches without any diffstats.

-U<n>, --unified=<n>
    Generate diffs with <n> lines of context instead of the usual three.

--output=<file>
    Output to a specific file instead of stdout.

--output-indicator-new=<char>, --output-indicator-old=<char>, --output-indicator-context=<char>
    Specify the character used to indicate new, old or context lines in the generated patch. Normally they are +, - and ' ' respectively.

--indent-heuristic
    Enable the heuristic that shifts diff hunk boundaries to make patches easier to read. This is the default.

--no-indent-heuristic
    Disable the indent heuristic.

--minimal
    Spend extra time to make sure the smallest possible diff is produced.

--patience
    Generate a diff using the "patience diff" algorithm.

--histogram
    Generate a diff using the "histogram diff" algorithm.

--anchored=<text>
    Generate a diff using the "anchored diff" algorithm. This option may be specified more than once.
    If a line exists in both the source and destination, exists only once, and starts with this text, this algorithm attempts to prevent it from appearing as a deletion or addition in the output. It uses the "patience diff" algorithm internally.

--diff-algorithm={patience|minimal|histogram|myers}
    Choose a diff algorithm. The variants are as follows:

    default, myers
        The basic greedy diff algorithm. Currently, this is the default.

    minimal
        Spend extra time to make sure the smallest possible diff is produced.

    patience
        Use "patience diff" algorithm when generating patches.

    histogram
        This algorithm extends the patience algorithm to "support low-occurrence common elements".

    For instance, if you configured the diff.algorithm variable to a non-default value and want to use the default one, then you have to use the --diff-algorithm=default option.

--stat[=<width>[,<name-width>[,<count>]]]
    Generate a diffstat. By default, as much space as necessary will be used for the filename part, and the rest for the graph part. Maximum width defaults to terminal width, or 80 columns if not connected to a terminal, and can be overridden by <width>. The width of the filename part can be limited by giving another width <name-width> after a comma. The width of the graph part can be limited by using --stat-graph-width=<width> (affects all commands generating a stat graph) or by setting diff.statGraphWidth=<width> (does not affect git format-patch). By giving a third parameter <count>, you can limit the output to the first <count> lines, followed by ... if there are more.

    These parameters can also be set individually with --stat-width=<width>, --stat-name-width=<name-width> and --stat-count=<count>.

--compact-summary
    Output a condensed summary of extended header information such as file creations or deletions ("new" or "gone", optionally "+l" if it’s a symlink) and mode changes ("+x" or "-x" for adding or removing executable bit respectively) in diffstat.
    The information is put between the filename part and the graph part. Implies --stat.

--numstat
    Similar to --stat, but shows number of added and deleted lines in decimal notation and pathname without abbreviation, to make it more machine friendly. For binary files, outputs two - instead of saying 0 0.

--shortstat
    Output only the last line of the --stat format containing total number of modified files, as well as number of added and deleted lines.

-X[<param1,param2,...>], --dirstat[=<param1,param2,...>]
    Output the distribution of relative amount of changes for each sub-directory. The behavior of --dirstat can be customized by passing it a comma separated list of parameters. The defaults are controlled by the diff.dirstat configuration variable (see git-config(1)). The following parameters are available:

    changes
        Compute the dirstat numbers by counting the lines that have been removed from the source, or added to the destination. This ignores the amount of pure code movements within a file. In other words, rearranging lines in a file is not counted as much as other changes. This is the default behavior when no parameter is given.

    lines
        Compute the dirstat numbers by doing the regular line-based diff analysis, and summing the removed/added line counts. (For binary files, count 64-byte chunks instead, since binary files have no natural concept of lines). This is a more expensive --dirstat behavior than the changes behavior, but it does count rearranged lines within a file as much as other changes. The resulting output is consistent with what you get from the other --*stat options.

    files
        Compute the dirstat numbers by counting the number of files changed. Each changed file counts equally in the dirstat analysis. This is the computationally cheapest --dirstat behavior, since it does not have to look at the file contents at all.

    cumulative
        Count changes in a child directory for the parent directory as well.
        Note that when using cumulative, the sum of the percentages reported may exceed 100%. The default (non-cumulative) behavior can be specified with the noncumulative parameter.

    <limit>
        An integer parameter specifies a cut-off percent (3% by default). Directories contributing less than this percentage of the changes are not shown in the output.

    Example: The following will count changed files, while ignoring directories with less than 10% of the total amount of changed files, and accumulating child directory counts in the parent directories: --dirstat=files,10,cumulative.

--cumulative
    Synonym for --dirstat=cumulative.

--dirstat-by-file[=<param1,param2>...]
    Synonym for --dirstat=files,param1,param2...

--summary
    Output a condensed summary of extended header information such as creations, renames and mode changes.

--no-renames
    Turn off rename detection, even when the configuration file gives the default to do so.

--[no-]rename-empty
    Whether to use empty blobs as rename source.

--full-index
    Instead of the first handful of characters, show the full pre- and post-image blob object names on the "index" line when generating patch format output.

--binary
    In addition to --full-index, output a binary diff that can be applied with git-apply.

--abbrev[=<n>]
    Instead of showing the full 40-byte hexadecimal object name in diff-raw format output and diff-tree header lines, show the shortest prefix that is at least <n> hexdigits long that uniquely refers the object. In diff-patch output format, --full-index takes higher precedence, i.e. if --full-index is specified, full blob names will be shown regardless of --abbrev. A non-default number of digits can be specified with --abbrev=<n>.

-B[<n>][/<m>], --break-rewrites[=[<n>][/<m>]]
    Break complete rewrite changes into pairs of delete and create.
    This serves two purposes:

    It affects the way a change that amounts to a total rewrite of a file is seen not as a series of deletion and insertion mixed together with a very few lines that happen to match textually as the context, but as a single deletion of everything old followed by a single insertion of everything new, and the number m controls this aspect of the -B option (defaults to 60%). -B/70% specifies that less than 30% of the original should remain in the result for Git to consider it a total rewrite (i.e. otherwise the resulting patch will be a series of deletion and insertion mixed together with context lines).

    When used with -M, a totally-rewritten file is also considered as the source of a rename (usually -M only considers a file that disappeared as the source of a rename), and the number n controls this aspect of the -B option (defaults to 50%). -B20% specifies that a change with addition and deletion compared to 20% or more of the file’s size are eligible for being picked up as a possible source of a rename to another file.

-M[<n>], --find-renames[=<n>]
    Detect renames. If n is specified, it is a threshold on the similarity index (i.e. amount of addition/deletions compared to the file’s size). For example, -M90% means Git should consider a delete/add pair to be a rename if more than 90% of the file hasn’t changed. Without a % sign, the number is to be read as a fraction, with a decimal point before it. I.e., -M5 becomes 0.5, and is thus the same as -M50%. Similarly, -M05 is the same as -M5%. To limit detection to exact renames, use -M100%. The default similarity index is 50%.

-C[<n>], --find-copies[=<n>]
    Detect copies as well as renames. See also --find-copies-harder. If n is specified, it has the same meaning as for -M<n>.

--find-copies-harder
    For performance reasons, by default, the -C option finds copies only if the original file of the copy was modified in the same changeset.
This flag makes the command inspect unmodified files as candidates for the source of copy. This is a very expensive operation for large projects, so use it with caution. Giving more than one -C option has the same effect. -D, --irreversible-delete Omit the preimage for deletes, i.e. print only the header but not the diff between the preimage and /dev/null. The resulting patch is not meant to be applied with patch or git apply; this is solely for people who want to just concentrate on reviewing the text after the change. In addition, the output obviously lacks enough information to apply such a patch in reverse, even manually, hence the name of the option. When used together with -B, omit also the preimage in the deletion part of a delete/create pair. -l<num> The -M and -C options involve some preliminary steps that can detect subsets of renames/copies cheaply, followed by an exhaustive fallback portion that compares all remaining unpaired destinations to all relevant sources. (For renames, only remaining unpaired sources are relevant; for copies, all original sources are relevant.) For N sources and destinations, this exhaustive check is O(N^2). This option prevents the exhaustive portion of rename/copy detection from running if the number of source/destination files involved exceeds the specified number. Defaults to diff.renameLimit. Note that a value of 0 is treated as unlimited. -O<orderfile> Control the order in which files appear in the output. This overrides the diff.orderFile configuration variable (see git-config(1)). To cancel diff.orderFile, use -O/dev/null. The output order is determined by the order of glob patterns in <orderfile>. All files with pathnames that match the first pattern are output first, all files with pathnames that match the second pattern (but not the first) are output next, and so on. All files with pathnames that do not match any pattern are output last, as if there was an implicit match-all pattern at the end of the file. 
If multiple pathnames have the same rank (they match the same pattern but no earlier patterns), their output order relative to each other is the normal order. <orderfile> is parsed as follows: • Blank lines are ignored, so they can be used as separators for readability. • Lines starting with a hash ("#") are ignored, so they can be used for comments. Add a backslash ("\") to the beginning of the pattern if it starts with a hash. • Each other line contains a single pattern. Patterns have the same syntax and semantics as patterns used for fnmatch(3) without the FNM_PATHNAME flag, except a pathname also matches a pattern if removing any number of the final pathname components matches the pattern. For example, the pattern "foo*bar" matches "fooasdfbar" and "foo/bar/baz/asdf" but not "foobarx". --skip-to=<file>, --rotate-to=<file> Discard the files before the named <file> from the output (i.e. skip to), or move them to the end of the output (i.e. rotate to). These were invented primarily for use of the git difftool command, and may not be very useful otherwise. --relative[=<path>], --no-relative When run from a subdirectory of the project, it can be told to exclude changes outside the directory and show pathnames relative to it with this option. When you are not in a subdirectory (e.g. in a bare repository), you can name which subdirectory to make the output relative to by giving a <path> as an argument. --no-relative can be used to countermand both diff.relative config option and previous --relative. -a, --text Treat all files as text. --ignore-cr-at-eol Ignore carriage-return at the end of line when doing a comparison. --ignore-space-at-eol Ignore changes in whitespace at EOL. -b, --ignore-space-change Ignore changes in amount of whitespace. This ignores whitespace at line end, and considers all other sequences of one or more whitespace characters to be equivalent. -w, --ignore-all-space Ignore whitespace when comparing lines. 
This ignores differences even if one line has whitespace where the other line has none. --ignore-blank-lines Ignore changes whose lines are all blank. -I<regex>, --ignore-matching-lines=<regex> Ignore changes whose all lines match <regex>. This option may be specified more than once. --inter-hunk-context=<lines> Show the context between diff hunks, up to the specified number of lines, thereby fusing hunks that are close to each other. Defaults to diff.interHunkContext or 0 if the config option is unset. -W, --function-context Show whole function as context lines for each change. The function names are determined in the same way as git diff works out patch hunk headers (see Defining a custom hunk-header in gitattributes(5)). --ext-diff Allow an external diff helper to be executed. If you set an external diff driver with gitattributes(5), you need to use this option with git-log(1) and friends. --no-ext-diff Disallow external diff drivers. --textconv, --no-textconv Allow (or disallow) external text conversion filters to be run when comparing binary files. See gitattributes(5) for details. Because textconv filters are typically a one-way conversion, the resulting diff is suitable for human consumption, but cannot be applied. For this reason, textconv filters are enabled by default only for git-diff(1) and git-log(1), but not for git-format-patch(1) or diff plumbing commands. --ignore-submodules[=<when>] Ignore changes to submodules in the diff generation. <when> can be either "none", "untracked", "dirty" or "all", which is the default. Using "none" will consider the submodule modified when it either contains untracked or modified files or its HEAD differs from the commit recorded in the superproject and can be used to override any settings of the ignore option in git-config(1) or gitmodules(5). When "untracked" is used submodules are not considered dirty when they only contain untracked content (but they are still scanned for modified content). 
Using "dirty" ignores all changes to the work tree of submodules, only changes to the commits stored in the superproject are shown (this was the behavior until 1.7.0). Using "all" hides all changes to submodules. --src-prefix=<prefix> Show the given source prefix instead of "a/". --dst-prefix=<prefix> Show the given destination prefix instead of "b/". --no-prefix Do not show any source or destination prefix. --default-prefix Use the default source and destination prefixes ("a/" and "b/"). This is usually the default already, but may be used to override config such as diff.noprefix. --line-prefix=<prefix> Prepend an additional prefix to every line of output. --ita-invisible-in-index By default entries added by "git add -N" appear as an existing empty file in "git diff" and a new file in "git diff --cached". This option makes the entry appear as a new file in "git diff" and non-existent in "git diff --cached". This option could be reverted with --ita-visible-in-index. Both options are experimental and could be removed in future. For more detailed explanation on these common options, see also gitdiffcore(7). -<n> Prepare patches from the topmost <n> commits. -o <dir>, --output-directory <dir> Use <dir> to store the resulting files, instead of the current working directory. -n, --numbered Name output in [PATCH n/m] format, even with a single patch. -N, --no-numbered Name output in [PATCH] format. --start-number <n> Start numbering the patches at <n> instead of 1. --numbered-files Output file names will be a simple number sequence without the default first line of the commit appended. -k, --keep-subject Do not strip/add [PATCH] from the first line of the commit log message. -s, --signoff Add a Signed-off-by trailer to the commit message, using the committer identity of yourself. See the signoff option in git-commit(1) for more information. --stdout Print all commits to the standard output in mbox format, instead of creating a file for each one. 
--attach[=<boundary>] Create multipart/mixed attachment, the first part of which is the commit message and the patch itself in the second part, with Content-Disposition: attachment. --no-attach Disable the creation of an attachment, overriding the configuration setting. --inline[=<boundary>] Create multipart/mixed attachment, the first part of which is the commit message and the patch itself in the second part, with Content-Disposition: inline. --thread[=<style>], --no-thread Controls addition of In-Reply-To and References headers to make the second and subsequent mails appear as replies to the first. Also controls generation of the Message-ID header to reference. The optional <style> argument can be either shallow or deep. shallow threading makes every mail a reply to the head of the series, where the head is chosen from the cover letter, the --in-reply-to, and the first patch mail, in this order. deep threading makes every mail a reply to the previous one. The default is --no-thread, unless the format.thread configuration is set. --thread without an argument is equivalent to --thread=shallow. Beware that the default for git send-email is to thread emails itself. If you want git format-patch to take care of threading, you will want to ensure that threading is disabled for git send-email. --in-reply-to=<message id> Make the first mail (or all the mails with --no-thread) appear as a reply to the given <message id>, which avoids breaking threads to provide a new patch series. --ignore-if-in-upstream Do not include a patch that matches a commit in <until>..<since>. This will examine all patches reachable from <since> but not from <until> and compare them with the patches being generated, and any patch that matches is ignored. --always Include patches for commits that do not introduce any change, which are omitted by default. --cover-from-description=<mode> Controls which parts of the cover letter will be automatically populated using the branch’s description. 
If <mode> is message or default, the cover letter subject will be populated with placeholder text. The body of the cover letter will be populated with the branch's description. This is the default mode when no configuration nor command line option is specified.

If <mode> is subject, the first paragraph of the branch description will populate the cover letter subject. The remainder of the description will populate the body of the cover letter.

If <mode> is auto, if the first paragraph of the branch description is greater than 100 bytes, then the mode will be message, otherwise subject will be used.

If <mode> is none, both the cover letter subject and body will be populated with placeholder text.

--subject-prefix=<subject prefix>
    Use [<subject prefix>] instead of the standard [PATCH] prefix in the subject line. This allows for useful naming of a patch series, and can be combined with the --numbered option.

--filename-max-length=<n>
    Instead of the standard 64 bytes, chomp the generated output filenames at around <n> bytes (too short a value will be silently raised to a reasonable length). Defaults to the value of the format.filenameMaxLength configuration variable, or 64 if unconfigured.

--rfc
    Alias for --subject-prefix="RFC PATCH". RFC means "Request For Comments"; use this when sending an experimental patch for discussion rather than application.

-v <n>, --reroll-count=<n>
    Mark the series as the <n>-th iteration of the topic. The output filenames have v<n> prepended to them, and the subject prefix ("PATCH" by default, but configurable via the --subject-prefix option) has ` v<n>` appended to it. E.g. --reroll-count=4 may produce a v4-0001-add-makefile.patch file that has "Subject: [PATCH v4 1/20] Add makefile" in it. <n> does not have to be an integer (e.g.
"--reroll-count=4.4", or "--reroll-count=4rev2" are allowed), but the downside of using such a reroll-count is that the range-diff/interdiff with the previous version does not state exactly which version the new iteration is compared against. --to=<email> Add a To: header to the email headers. This is in addition to any configured headers, and may be used multiple times. The negated form --no-to discards all To: headers added so far (from config or command line). --cc=<email> Add a Cc: header to the email headers. This is in addition to any configured headers, and may be used multiple times. The negated form --no-cc discards all Cc: headers added so far (from config or command line). --from, --from=<ident> Use ident in the From: header of each commit email. If the author ident of the commit is not textually identical to the provided ident, place a From: header in the body of the message with the original author. If no ident is given, use the committer ident. Note that this option is only useful if you are actually sending the emails and want to identify yourself as the sender, but retain the original author (and git am will correctly pick up the in-body header). Note also that git send-email already handles this transformation for you, and this option should not be used if you are feeding the result to git send-email. --[no-]force-in-body-from With the e-mail sender specified via the --from option, by default, an in-body "From:" to identify the real author of the commit is added at the top of the commit log message if the sender is different from the author. With this option, the in-body "From:" is added even when the sender and the author have the same name and address, which may help if the mailing list software mangles the sender’s identity. Defaults to the value of the format.forceInBodyFrom configuration variable. --add-header=<header> Add an arbitrary header to the email headers. This is in addition to any configured headers, and may be used multiple times. 
For example, --add-header="Organization: git-foo". The negated form --no-add-header discards all (To:, Cc:, and custom) headers added so far from config or command line. --[no-]cover-letter In addition to the patches, generate a cover letter file containing the branch description, shortlog and the overall diffstat. You can fill in a description in the file before sending it out. --encode-email-headers, --no-encode-email-headers Encode email headers that have non-ASCII characters with "Q-encoding" (described in RFC 2047), instead of outputting the headers verbatim. Defaults to the value of the format.encodeEmailHeaders configuration variable. --interdiff=<previous> As a reviewer aid, insert an interdiff into the cover letter, or as commentary of the lone patch of a 1-patch series, showing the differences between the previous version of the patch series and the series currently being formatted. previous is a single revision naming the tip of the previous series which shares a common base with the series being formatted (for example git format-patch --cover-letter --interdiff=feature/v1 -3 feature/v2). --range-diff=<previous> As a reviewer aid, insert a range-diff (see git-range-diff(1)) into the cover letter, or as commentary of the lone patch of a 1-patch series, showing the differences between the previous version of the patch series and the series currently being formatted. previous can be a single revision naming the tip of the previous series if it shares a common base with the series being formatted (for example git format-patch --cover-letter --range-diff=feature/v1 -3 feature/v2), or a revision range if the two versions of the series are disjoint (for example git format-patch --cover-letter --range-diff=feature/v1~3..feature/v1 -3 feature/v2). 
Note that diff options passed to the command affect how the primary product of format-patch is generated, and they are not passed to the underlying range-diff machinery used to generate the cover-letter material (this may change in the future).

--creation-factor=<percent>
    Used with --range-diff, tweak the heuristic which matches up commits between the previous and current series of patches by adjusting the creation/deletion cost fudge factor. See git-range-diff(1) for details.

--notes[=<ref>], --no-notes
    Append the notes (see git-notes(1)) for the commit after the three-dash line. The expected use case of this is to write supporting explanation for the commit that does not belong to the commit log message proper, and include it with the patch submission. While one can simply write these explanations after format-patch has run but before sending, keeping them as Git notes allows them to be maintained between versions of the patch series (but see the discussion of the notes.rewrite configuration options in git-notes(1) to use this workflow). The default is --no-notes, unless the format.notes configuration is set.

--[no-]signature=<signature>
    Add a signature to each message produced. Per RFC 3676 the signature is separated from the body by a line with '-- ' on it. If the signature option is omitted the signature defaults to the Git version number.

--signature-file=<file>
    Works just like --signature except the signature is read from a file.

--suffix=.<sfx>
    Instead of using .patch as the suffix for generated filenames, use the specified suffix. A common alternative is --suffix=.txt. Leaving this empty will remove the .patch suffix. Note that the leading character does not have to be a dot; for example, you can use --suffix=-patch to get 0001-description-of-my-change-patch.

-q, --quiet
    Do not print the names of the generated files to standard output.

--no-binary
    Do not output contents of changes in binary files, instead display a notice that those files changed.
Patches generated using this option cannot be applied properly, but they are still useful for code review. --zero-commit Output an all-zero hash in each patch’s From header instead of the hash of the commit. --[no-]base[=<commit>] Record the base tree information to identify the state the patch series applies to. See the BASE TREE INFORMATION section below for details. If <commit> is "auto", a base commit is automatically chosen. The --no-base option overrides a format.useAutoBase configuration. --root Treat the revision argument as a <revision range>, even if it is just a single commit (that would normally be treated as a <since>). Note that root commits included in the specified range are always formatted as creation patches, independently of this flag. --progress Show progress reports on stderr as patches are generated.
# git format-patch

> Prepare .patch files. Useful when emailing commits elsewhere. See also `git am`, which can apply generated .patch files.
> More information: https://git-scm.com/docs/git-format-patch.

* Create an auto-named `.patch` file for all the unpushed commits:

`git format-patch {{origin}}`

* Write a `.patch` file for all the commits between 2 revisions to `stdout`:

`git format-patch {{revision_1}}..{{revision_2}}`

* Write a `.patch` file for the 3 latest commits:

`git format-patch -{{3}}`
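The numbering, reroll, and output-directory options above can be exercised end to end in a throwaway repository. A minimal sketch, assuming `git` is installed; the repository path, identity, and commit messages are all invented for illustration:

```shell
# Build a scratch repository with three commits, then format the last
# two as a v2 patch series (paths/identities here are hypothetical).
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name "Dev"
for i in 1 2 3; do
  echo "change $i" > file.txt
  git add file.txt
  git commit -q -m "Change $i"
done
# -2: last two commits; -v2: mark as the second iteration of the series;
# -o: write the .patch files into the given directory
git format-patch -2 -v2 -o patches
```

With this setup the commit subjects are sanitized into the file names (e.g. `patches/v2-0001-Change-2.patch`), and each mail carries a `[PATCH v2 n/2]` subject prefix.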
false
Exit with a status code indicating failure.

--help
    display this help and exit

--version
    output version information and exit

NOTE: your shell may have its own version of false, which usually supersedes the version described here. Please refer to your shell's documentation for details about the options it supports.
# false

> Returns a non-zero exit code.
> More information: https://www.gnu.org/software/coreutils/false.

* Return a non-zero exit code:

`false`
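Since `false` communicates only through its exit status, its typical use is to force the failure branch of a conditional or to stand in for a command that should fail. A small illustration:

```shell
# false produces no output; it simply exits with a failing status
# (1 with GNU coreutils).
if false; then
  echo "never reached"
else
  echo "false reported failure"
fi

# Capture the exit status; the '||' guard keeps 'set -e' scripts alive
false || status=$?
echo "exit status: $status"
```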
iconv
The iconv utility shall convert the encoding of characters in file from one codeset to another and write the results to standard output. When the options indicate that charmap files are used to specify the codesets (see OPTIONS), the codeset conversion shall be accomplished by performing a logical join on the symbolic character names in the two charmaps. The implementation need not support the use of charmap files for codeset conversion unless the POSIX2_LOCALEDEF symbol is defined on the system. The iconv utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines. The following options shall be supported: -c Omit any characters that are invalid in the codeset of the input file from the output. When -c is not used, the results of encountering invalid characters in the input stream (either those that are not characters in the codeset of the input file or that have no corresponding character in the codeset of the output file) shall be specified in the system documentation. The presence or absence of -c shall not affect the exit status of iconv. -f fromcodeset Identify the codeset of the input file. The implementation shall recognize the following two forms of the fromcodeset option-argument: fromcode The fromcode option-argument must not contain a <slash> character. It shall be interpreted as the name of one of the codeset descriptions provided by the implementation in an unspecified format. Valid values of fromcode are implementation-defined. frommap The frommap option-argument must contain a <slash> character. It shall be interpreted as the pathname of a charmap file as defined in the Base Definitions volume of POSIX.1‐2017, Section 6.4, Character Set Description File. If the pathname does not represent a valid, readable charmap file, the results are undefined. If this option is omitted, the codeset of the current locale shall be used. 
-l Write all supported fromcode and tocode values to standard output in an unspecified format. -s Suppress any messages written to standard error concerning invalid characters. When -s is not used, the results of encountering invalid characters in the input stream (either those that are not valid characters in the codeset of the input file or that have no corresponding character in the codeset of the output file) shall be specified in the system documentation. The presence or absence of -s shall not affect the exit status of iconv. -t tocodeset Identify the codeset to be used for the output file. The implementation shall recognize the following two forms of the tocodeset option-argument: tocode The semantics shall be equivalent to the -f fromcode option. tomap The semantics shall be equivalent to the -f frommap option. If this option is omitted, the codeset of the current locale shall be used. If either -f or -t represents a charmap file, but the other does not (or is omitted), or both -f and -t are omitted, the results are undefined.
# iconv

> Converts text from one encoding to another.
> More information: https://manned.org/iconv.

* Convert file to a specific encoding, and print to `stdout`:

`iconv -f {{from_encoding}} -t {{to_encoding}} {{input_file}}`

* Convert file to the current locale's encoding, and output to a file:

`iconv -f {{from_encoding}} {{input_file}} > {{output_file}}`

* List supported encodings:

`iconv -l`
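Round-tripping a small string shows the basic -f/-t usage. A sketch assuming the common UTF-8 and ISO-8859-1 codesets are available (supported names vary by implementation; check `iconv -l`):

```shell
cd "$(mktemp -d)"
printf 'café' > utf8.txt                               # 'é' is 2 bytes in UTF-8
iconv -f UTF-8 -t ISO-8859-1 utf8.txt > latin1.txt     # 'é' becomes 1 byte
iconv -f ISO-8859-1 -t UTF-8 latin1.txt > roundtrip.txt
cmp -s utf8.txt roundtrip.txt && echo "round-trip is lossless"
```

Characters with no counterpart in the target codeset are where -c (drop them) and -s (silence the complaints) come into play.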
sync
Synchronize cached writes to persistent storage.

If one or more files are specified, sync only them, or their containing file systems.

-d, --data
    sync only file data, no unneeded metadata

-f, --file-system
    sync the file systems that contain the files

--help
    display this help and exit

--version
    output version information and exit
# sync

> Flushes all pending write operations to the appropriate disks.
> More information: https://www.gnu.org/software/coreutils/sync.

* Flush all pending write operations on all disks:

`sync`

* Flush all pending write operations on a single file to disk:

`sync {{path/to/file}}`
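A short sketch of the per-file forms; `sync` reports success only via its exit status, and the `-d`/`-f` variants assume GNU coreutils:

```shell
# Write a file, then make sure it has reached persistent storage
# (the file name is made up for illustration).
tmp=$(mktemp)
echo "important data" > "$tmp"
sync "$tmp"      # flush just this file
sync -d "$tmp"   # flush only the file's data, skipping unneeded metadata
sync -f "$tmp"   # flush the whole file system containing the file
sync_ok=yes
rm "$tmp"
```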
diff
The diff utility shall compare the contents of file1 and file2 and write to standard output a list of changes necessary to convert file1 into file2. This list should be minimal. No output shall be produced if the files are identical. The diff utility shall conform to the Base Definitions volume of POSIX.1-2017, Section 12.2, Utility Syntax Guidelines.

The following options shall be supported:

-b
    Cause any amount of white space at the end of a line to be treated as a single <newline> (that is, the white-space characters preceding the <newline> are ignored) and other strings of white-space characters, not including <newline> characters, to compare equal.

-c
    Produce output in a form that provides three lines of copied context.

-C n
    Produce output in a form that provides n lines of copied context (where n shall be interpreted as a positive decimal integer).

-e
    Produce output in a form suitable as input for the ed utility, which can then be used to convert file1 into file2.

-f
    Produce output in an alternative form, similar in format to -e, but not intended to be suitable as input for the ed utility, and in the opposite order.

-r
    Apply diff recursively to files and directories of the same name when file1 and file2 are both directories. The diff utility shall detect infinite loops; that is, entering a previously visited directory that is an ancestor of the last file encountered. When it detects an infinite loop, diff shall write a diagnostic message to standard error and shall either recover its position in the hierarchy or terminate.

-u
    Produce output in a form that provides three lines of unified context.

-U n
    Produce output in a form that provides n lines of unified context (where n shall be interpreted as a non-negative decimal integer).
# diff

> Compare files and directories.
> More information: https://man7.org/linux/man-pages/man1/diff.1.html.

* Compare files (lists changes to turn `old_file` into `new_file`):

`diff {{old_file}} {{new_file}}`

* Compare files, ignoring white spaces:

`diff --ignore-all-space {{old_file}} {{new_file}}`

* Compare files, showing the differences side by side:

`diff --side-by-side {{old_file}} {{new_file}}`

* Compare files, showing the differences in unified format (as used by `git diff`):

`diff --unified {{old_file}} {{new_file}}`

* Compare directories recursively (shows names for differing files/directories as well as changes made to files):

`diff --recursive {{old_directory}} {{new_directory}}`

* Compare directories, only showing the names of files that differ:

`diff --recursive --brief {{old_directory}} {{new_directory}}`

* Create a patch file for Git from the differences of two text files, treating nonexistent files as empty:

`diff --text --unified --new-file {{old_file}} {{new_file}} > {{diff.patch}}`
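The exit-status convention and the unified format can be seen with two tiny files; all file names below are made up for illustration:

```shell
cd "$(mktemp -d)"
printf 'one\ntwo\nthree\n' > old.txt
printf 'one\n2\nthree\n'   > new.txt

# diff exits 0 when the files match and 1 when they differ,
# so it doubles as a comparison test in scripts
diff -u old.txt new.txt > changes.patch && echo "same" || echo "files differ"
cat changes.patch
```

In the unified output, lines removed from old.txt are prefixed with `-` and lines added in new.txt with `+`.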
rmdir
Remove the DIRECTORY(ies), if they are empty.

--ignore-fail-on-non-empty
    ignore each failure to remove a non-empty directory

-p, --parents
    remove DIRECTORY and its ancestors; e.g., 'rmdir -p a/b' is similar to 'rmdir a/b a'

-v, --verbose
    output a diagnostic for every directory processed

--help
    display this help and exit

--version
    output version information and exit
# rmdir

> Remove directories without files. See also: `rm`.
> More information: https://www.gnu.org/software/coreutils/rmdir.

* Remove specific directories:

`rmdir {{path/to/directory1 path/to/directory2 ...}}`

* Remove specific nested directories recursively:

`rmdir -p {{path/to/directory1 path/to/directory2 ...}}`
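The -p behavior and the empty-directories-only rule are easy to demonstrate (directory names are hypothetical):

```shell
cd "$(mktemp -d)"
mkdir -p a/b/c
rmdir -p a/b/c                 # removes c, then b, then a
[ ! -e a ] && echo "a/b/c all removed"

mkdir -p d
touch d/file
# rmdir never deletes contents, so this fails and d survives
rmdir d 2>/dev/null || echo "d is not empty, rmdir refuses"
```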
shuf
Write a random permutation of the input lines to standard output.

With no FILE, or when FILE is -, read standard input.

Mandatory arguments to long options are mandatory for short options too.

-e, --echo
    treat each ARG as an input line

-i, --input-range=LO-HI
    treat each number LO through HI as an input line

-n, --head-count=COUNT
    output at most COUNT lines

-o, --output=FILE
    write result to FILE instead of standard output

--random-source=FILE
    get random bytes from FILE

-r, --repeat
    output lines can be repeated

-z, --zero-terminated
    line delimiter is NUL, not newline

--help
    display this help and exit

--version
    output version information and exit
# shuf

> Generate random permutations.
> More information: https://www.unix.com/man-page/linux/1/shuf/.

* Randomize the order of lines in a file and output the result:

`shuf {{path/to/file}}`

* Only output the first 5 entries of the result:

`shuf --head-count={{5}} {{path/to/file}}`

* Write output to another file:

`shuf {{path/to/file}} --output={{output_filename}}`

* Generate random numbers in range 1-10:

`shuf --input-range={{1-10}}`
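A sketch of the -i/-n/-e forms; without -r the picks are a sample without replacement, so they are distinct:

```shell
cd "$(mktemp -d)"
# Draw 3 distinct numbers from 1..10 (add -r to allow repeats)
shuf -i 1-10 -n 3 > picks.txt
cat picks.txt

# Shuffle command-line arguments instead of file lines
shuf -e red green blue
```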
git-bundle
Create, unpack, and manipulate "bundle" files. Bundles are used for the "offline" transfer of Git objects without an active "server" sitting on the other side of the network connection. They can be used to create both incremental and full backups of a repository, and to relay the state of the references in one repository to another.

Git commands that fetch or otherwise "read" via protocols such as ssh:// and https:// can also operate on bundle files. It is possible to use git-clone(1) to create a new repository from a bundle, to use git-fetch(1) to fetch from one, and to list the references contained within it with git-ls-remote(1). There's no corresponding "write" support, i.e. a git push into a bundle is not supported. See the "EXAMPLES" section below for examples of how to use bundles.

create [options] <file> <git-rev-list-args>
    Used to create a bundle named file. This requires the <git-rev-list-args> arguments to define the bundle contents. options contains the options specific to the git bundle create subcommand. If file is -, the bundle is written to stdout.

verify <file>
    Used to check that a bundle file is valid and will apply cleanly to the current repository. This includes checks on the bundle format itself as well as checking that the prerequisite commits exist and are fully linked in the current repository. Then, git bundle prints a list of missing commits, if any. Finally, information about additional capabilities, such as "object filter", is printed. See "Capabilities" in gitformat-bundle(5) for more information. The exit code is zero for success, but will be nonzero if the bundle file is invalid. If file is -, the bundle is read from stdin.

list-heads <file>
    Lists the references defined in the bundle. If followed by a list of references, only references matching those given are printed out. If file is -, the bundle is read from stdin.
unbundle <file>
    Passes the objects in the bundle to git index-pack for storage in the repository, then prints the names of all defined references. If a list of references is given, only references matching those in the list are printed. This command is really plumbing, intended to be called only by git fetch. If file is -, the bundle is read from stdin.

<git-rev-list-args>
    A list of arguments, acceptable to git rev-parse and git rev-list (and containing a named ref, see SPECIFYING REFERENCES below), that specifies the specific objects and references to transport. For example, master~10..master causes the current master reference to be packaged along with all objects added since its 10th ancestor commit. There is no explicit limit to the number of references and objects that may be packaged.

[<refname>...]
    A list of references used to limit the references reported as available. This is principally of use to git fetch, which expects to receive only those references asked for and not necessarily everything in the pack (in this case, git bundle acts like git fetch-pack).

--progress
    Progress status is reported on the standard error stream by default when it is attached to a terminal, unless -q is specified. This flag forces progress status even if the standard error stream is not directed to a terminal.

--version=<version>
    Specify the bundle version. Version 2 is the older format and can only be used with SHA-1 repositories; the newer version 3 contains capabilities that permit extensions. The default is the oldest supported format, based on the hash algorithm in use.

-q, --quiet
    This flag makes the command not report its progress on the standard error stream.
# git bundle

> Package objects and references into an archive.
> More information: https://git-scm.com/docs/git-bundle.

* Create a bundle file that contains all objects and references of a specific branch:

`git bundle create {{path/to/file.bundle}} {{branch_name}}`

* Create a bundle file of all branches:

`git bundle create {{path/to/file.bundle}} --all`

* Create a bundle file of the last 5 commits of the current branch:

`git bundle create {{path/to/file.bundle}} -{{5}} {{HEAD}}`

* Create a bundle file of the latest 7 days:

`git bundle create {{path/to/file.bundle}} --since={{7.days}} {{HEAD}}`

* Verify that a bundle file is valid and can be applied to the current repository:

`git bundle verify {{path/to/file.bundle}}`

* Print to `stdout` the list of references contained in a bundle:

`git bundle unbundle {{path/to/file.bundle}}`

* Unbundle a specific branch from a bundle file into the current repository:

`git pull {{path/to/file.bundle}} {{branch_name}}`
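The create/verify/clone round trip can be sketched with a scratch repository (paths, identity, and file names are made up; assumes `git` is installed):

```shell
set -e
src=$(mktemp -d)
cd "$src"
git init -q
git config user.email dev@example.com
git config user.name "Dev"
echo hello > file.txt
git add file.txt
git commit -q -m "Initial commit"

# Pack every ref (plus HEAD, so the bundle is cloneable) into one file
git bundle create repo.bundle --all HEAD
git bundle verify repo.bundle > /dev/null && echo "bundle is valid"
git bundle list-heads repo.bundle

# A bundle can stand in for a remote when cloning
clone=$(mktemp -d)
git clone -q repo.bundle "$clone/copy"
```

This illustrates the "bundle as offline remote" idea from the description: the clone reads from the file exactly as it would from an ssh:// or https:// remote.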
link
Call the link function to create a link named FILE2 to an existing FILE1. --help display this help and exit --version output version information and exit
# link > Create a hard link to an existing file. For more options, see the `ln` > command. More information: https://www.gnu.org/software/coreutils/link. * Create a hard link from a new file to an existing file: `link {{path/to/existing_file}} {{path/to/new_file}}`
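A quick way to see that `link` creates a second name for the same inode, sketched with throwaway file names (GNU `stat` is assumed for the `-c %i` format):

```shell
# Create a file, hard-link it, and show that both names share one inode.
set -e
rm -f original.txt hardlink.txt
printf 'hello\n' > original.txt
link original.txt hardlink.txt   # equivalent to: ln original.txt hardlink.txt

# Same inode number for both names; content is shared, not copied.
stat -c %i original.txt
stat -c %i hardlink.txt
cat hardlink.txt                 # → hello
```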
systemd-delta
systemd-delta may be used to identify and compare configuration files that override other configuration files. Files in /etc/ have highest priority, files in /run/ have the second highest priority, ..., files in /usr/lib/ have lowest priority. Files in a directory with higher priority override files with the same name in directories of lower priority. In addition, certain configuration files can have ".d" directories which contain "drop-in" files with configuration snippets which augment the main configuration file. "Drop-in" files can be overridden in the same way by placing files with the same name in a directory of higher priority (except that, in case of "drop-in" files, both the "drop-in" file name and the name of the containing directory, which corresponds to the name of the main configuration file, must match). For a fuller explanation, see systemd.unit(5). The command line argument will be split into a prefix and a suffix. Either is optional. The prefix must be one of the directories containing configuration files (/etc/, /run/, /usr/lib/, ...). If it is given, only overriding files contained in this directory will be shown. Otherwise, all overriding files will be shown. The suffix must be a name of a subdirectory containing configuration files like tmpfiles.d, sysctl.d or systemd/system. If it is given, only configuration files in this subdirectory (across all configuration paths) will be analyzed. Otherwise, all configuration files will be analyzed. If the command line argument is not given at all, all configuration files will be analyzed. See below for some examples. The following options are understood: -t, --type= When listing the differences, only list those that are asked for. The list itself is a comma-separated list of desired difference types. Recognized types are: masked Show masked files equivalent Show overridden files that while overridden, do not differ in content. redirected Show files that are redirected to another. 
overridden Show overridden and changed files. extended Show *.conf files in drop-in directories for units. unchanged Show unmodified files too. --diff= When showing modified files, also show a diff for each overridden file. This option takes a boolean argument. If omitted, it defaults to true. -h, --help Print a short help text and exit. --version Print a short version string and exit. --no-pager Do not pipe output into a pager.
# systemd-delta > Find overridden systemd-related configuration files. More information: > https://www.freedesktop.org/software/systemd/man/systemd-delta.html. * Show all overridden configuration files: `systemd-delta` * Show only files of specific types (comma-separated list): `systemd-delta --type {{masked|equivalent|redirected|overridden|extended|unchanged}}` * Show only files whose path starts with the specified prefix (Note: a prefix is a directory containing subdirectories with systemd configuration files): `systemd-delta {{/etc|/run|/usr/lib|...}}` * Further restrict the search path by adding a suffix (the prefix is optional): `systemd-delta {{prefix}}/{{tmpfiles.d|sysctl.d|systemd/system|...}}`
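`systemd-delta` itself needs a systemd installation, so the priority rule it reports on can instead be mimicked with throwaway directories standing in for the real /etc, /run and /usr/lib (all names below are made up for the demo):

```shell
# Simulate systemd's lookup order: /etc beats /run, which beats /usr/lib.
set -e
rm -rf delta-demo
mkdir -p delta-demo/etc delta-demo/run delta-demo/usr-lib
echo "vendor default" > delta-demo/usr-lib/foo.conf
echo "runtime value"  > delta-demo/run/foo.conf
echo "admin override" > delta-demo/etc/foo.conf

# The highest-priority directory containing the file wins; the lower-priority
# copies are what systemd-delta would flag as overridden.
for dir in delta-demo/etc delta-demo/run delta-demo/usr-lib; do
    if [ -f "$dir/foo.conf" ]; then
        cat "$dir/foo.conf"   # → admin override
        break
    fi
done
```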
namei
namei interprets its arguments as pathnames to any type of Unix file (symlinks, files, directories, and so forth). namei then follows each pathname until an endpoint is found (a file, a directory, a device node, etc). If it finds a symbolic link, it shows the link, and starts following it, indenting the output to show the context. This program is useful for finding "too many levels of symbolic links" problems. For each line of output, namei uses the following characters to identify the file type found: f: = the pathname currently being resolved d = directory l = symbolic link (both the link and its contents are output) s = socket b = block device c = character device p = FIFO (named pipe) - = regular file ? = an error of some kind namei prints an informative message when the maximum number of symbolic links this system can have has been exceeded. -l, --long Use the long listing format (same as -m -o -v). -m, --modes Show the mode bits of each file type in the style of ls(1), for example 'rwxr-xr-x'. -n, --nosymlinks Don’t follow symlinks. -o, --owners Show owner and group name of each file. -v, --vertical Vertically align the modes and owners. -x, --mountpoints Show mountpoint directories with a 'D' rather than a 'd'. -Z, --context Show security context of the file or "?" if not available. The support for security contexts is optional and does not have to be compiled to the namei binary. -h, --help Display help text and exit. -V, --version Print version and exit.
# namei > Follows a pathname (which can be a symbolic link) until a terminal point is > found (a file/directory/char device etc). This program is useful for finding > "too many levels of symbolic links" problems. More information: > https://manned.org/namei. * Resolve the pathnames specified as the argument parameters: `namei {{path/to/a}} {{path/to/b}} {{path/to/c}}` * Display the results in a long-listing format: `namei --long {{path/to/a}} {{path/to/b}} {{path/to/c}}` * Show the mode bits of each file type in the style of `ls`: `namei --modes {{path/to/a}} {{path/to/b}} {{path/to/c}}` * Show owner and group name of each file: `namei --owners {{path/to/a}} {{path/to/b}} {{path/to/c}}` * Don't follow symlinks while resolving: `namei --nosymlinks {{path/to/a}} {{path/to/b}} {{path/to/c}}`
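The core of what `namei` does for symlinks — follow each link until a non-link endpoint is reached — can be sketched in plain shell with `readlink`. The paths below are made up for the demo, and the output format loosely imitates namei's `l`/`-` type characters:

```shell
# Build a two-link chain and follow it the way namei would.
set -e
rm -rf namei-demo && mkdir namei-demo
printf 'target\n' > namei-demo/file
ln -s file  namei-demo/link1
ln -s link1 namei-demo/link2

p=namei-demo/link2
while [ -L "$p" ]; do
    next=$(readlink "$p")
    echo "l $p -> $next"
    # readlink returns a path relative to the link's directory here.
    p=$(dirname "$p")/$next
done
echo "- $p"   # '-' marks a regular file, as in namei's output
```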
lastcomm
lastcomm prints out information about previously executed commands. If no arguments are specified, lastcomm will print info about all of the commands in acct (the record file). If called with one or more of command-name, user-name, or terminal-name, only records containing those items will be displayed. For example, to find out which users used command `a.out' and which users were logged into `tty0', type: lastcomm a.out tty0 This will print any entry for which `a.out' or `tty0' matches in any of the record's fields (command, name, or terminal). If you want to find only items that match *all* of the arguments on the command line, you must use the '--strict-match' option. For example, to list all of the executions of command a.out by user root on terminal tty0, type: lastcomm --strict-match --command a.out --user root --tty tty0 The order of the arguments is not important. For each entry the following information is printed: + command name of the process + flags, as recorded by the system accounting routines: S -- command executed by super-user F -- command executed after a fork but without a following exec C -- command run in PDP-11 compatibility mode (VAX only) D -- command terminated with the generation of a core file X -- command was terminated with the signal SIGTERM + the name of the user who ran the process + time the process started --strict-match Print only entries that match *all* of the arguments on the command line. --print-controls Print control characters. --user name List records for user with name. This is useful if you're trying to match a username that happens to be the same as a command (e.g., ed). --command name List records for command name. --tty name List records for tty name. --forwards Read file forwards instead of backwards. This avoids trying to seek on the file and can be used to read from a pipe. This must be specified prior to any -f arguments. -f filename, --file filename Read from the file filename instead of acct. 
A filename of "-" will result in reading from stdin. This must either be the first -f option, or --forwards must precede all -f options. --ahz hz Use this flag to tell the program what AHZ should be (in hertz). This option is useful if you are trying to view an acct file created on another machine which has the same byte order and file format as your current machine, but has a different value for AHZ. -p, --show-paging Print paging statistics. --pid Show the PID and parent PID of the process, added as the last two output columns (PID second to last, parent PID last). These values are shown only when the kernel's acct function recorded them (requires acct version 3 format support in the kernel). --debug Print verbose internal information. -V, --version Print the version number of lastcomm. -h, --help Prints the usage string and default locations of system files to standard output and exits.
# lastcomm > Show last commands executed. More information: > https://manpages.debian.org/latest/acct/lastcomm.1.en.html. * Print information about all the commands in the acct (record file): `lastcomm` * Display commands executed by a given user: `lastcomm --user {{user}}` * Display information about a given command executed on the system: `lastcomm --command {{command}}` * Display information about commands executed on a given terminal: `lastcomm --tty {{terminal_name}}`
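Process accounting is rarely enabled on demo machines, so the sketch below filters made-up lines laid out like lastcomm's columns (command, flags, user, terminal) the way `lastcomm --user root` would narrow real output:

```shell
# Fake lastcomm-style records, then select the commands run by root.
printf '%s\n' \
  'a.out S root  tty0' \
  'vi    - alice pts/1' \
  'a.out - alice tty0' |
awk '$3 == "root" { print $1 }'   # → a.out
```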
egrep
grep searches for PATTERNS in each FILE. PATTERNS is one or more patterns separated by newline characters, and grep prints each line that matches a pattern. Typically PATTERNS should be quoted when grep is used in a shell command. A FILE of “-” stands for standard input. If no FILE is given, recursive searches examine the working directory, and nonrecursive searches read standard input. Generic Program Information --help Output a usage message and exit. -V, --version Output the version number of grep and exit. Pattern Syntax -E, --extended-regexp Interpret PATTERNS as extended regular expressions (EREs, see below). -F, --fixed-strings Interpret PATTERNS as fixed strings, not regular expressions. -G, --basic-regexp Interpret PATTERNS as basic regular expressions (BREs, see below). This is the default. -P, --perl-regexp Interpret PATTERNS as Perl-compatible regular expressions (PCREs). This option is experimental when combined with the -z (--null-data) option, and grep -P may warn of unimplemented features. Matching Control -e PATTERNS, --regexp=PATTERNS Use PATTERNS as the patterns. If this option is used multiple times or is combined with the -f (--file) option, search for all patterns given. This option can be used to protect a pattern beginning with “-”. -f FILE, --file=FILE Obtain patterns from FILE, one per line. If this option is used multiple times or is combined with the -e (--regexp) option, search for all patterns given. The empty file contains zero patterns, and therefore matches nothing. If FILE is - , read patterns from standard input. -i, --ignore-case Ignore case distinctions in patterns and input data, so that characters that differ only in case match each other. --no-ignore-case Do not ignore case distinctions in patterns and input data. This is the default. This option is useful for passing to shell scripts that already use -i, to cancel its effects because the two options override each other. 
-v, --invert-match Invert the sense of matching, to select non-matching lines. -w, --word-regexp Select only those lines containing matches that form whole words. The test is that the matching substring must either be at the beginning of the line, or preceded by a non-word constituent character. Similarly, it must be either at the end of the line or followed by a non-word constituent character. Word-constituent characters are letters, digits, and the underscore. This option has no effect if -x is also specified. -x, --line-regexp Select only those matches that exactly match the whole line. For a regular expression pattern, this is like parenthesizing the pattern and then surrounding it with ^ and $. General Output Control -c, --count Suppress normal output; instead print a count of matching lines for each input file. With the -v, --invert-match option (see above), count non-matching lines. --color[=WHEN], --colour[=WHEN] Surround the matched (non-empty) strings, matching lines, context lines, file names, line numbers, byte offsets, and separators (for fields and groups of context lines) with escape sequences to display them in color on the terminal. The colors are defined by the environment variable GREP_COLORS. WHEN is never, always, or auto. -L, --files-without-match Suppress normal output; instead print the name of each input file from which no output would normally have been printed. -l, --files-with-matches Suppress normal output; instead print the name of each input file from which output would normally have been printed. Scanning each input file stops upon first match. -m NUM, --max-count=NUM Stop reading a file after NUM matching lines. If NUM is zero, grep stops right away without reading input. A NUM of -1 is treated as infinity and grep does not stop; this is the default. 
If the input is standard input from a regular file, and NUM matching lines are output, grep ensures that the standard input is positioned to just after the last matching line before exiting, regardless of the presence of trailing context lines. This enables a calling process to resume a search. When grep stops after NUM matching lines, it outputs any trailing context lines. When the -c or --count option is also used, grep does not output a count greater than NUM. When the -v or --invert-match option is also used, grep stops after outputting NUM non-matching lines. -o, --only-matching Print only the matched (non-empty) parts of a matching line, with each such part on a separate output line. -q, --quiet, --silent Quiet; do not write anything to standard output. Exit immediately with zero status if any match is found, even if an error was detected. Also see the -s or --no-messages option. -s, --no-messages Suppress error messages about nonexistent or unreadable files. Output Line Prefix Control -b, --byte-offset Print the 0-based byte offset within the input file before each line of output. If -o (--only-matching) is specified, print the offset of the matching part itself. -H, --with-filename Print the file name for each match. This is the default when there is more than one file to search. This is a GNU extension. -h, --no-filename Suppress the prefixing of file names on output. This is the default when there is only one file (or only standard input) to search. --label=LABEL Display input actually coming from standard input as input coming from file LABEL. This can be useful for commands that transform a file's contents before searching, e.g., gzip -cd foo.gz | grep --label=foo -H 'some pattern'. See also the -H option. -n, --line-number Prefix each line of output with the 1-based line number within its input file. -T, --initial-tab Make sure that the first character of actual line content lies on a tab stop, so that the alignment of tabs looks normal. 
This is useful with options that prefix their output to the actual content: -H,-n, and -b. In order to improve the probability that lines from a single file will all start at the same column, this also causes the line number and byte offset (if present) to be printed in a minimum size field width. -Z, --null Output a zero byte (the ASCII NUL character) instead of the character that normally follows a file name. For example, grep -lZ outputs a zero byte after each file name instead of the usual newline. This option makes the output unambiguous, even in the presence of file names containing unusual characters like newlines. This option can be used with commands like find -print0, perl -0, sort -z, and xargs -0 to process arbitrary file names, even those that contain newline characters. Context Line Control -A NUM, --after-context=NUM Print NUM lines of trailing context after matching lines. Places a line containing a group separator (--) between contiguous groups of matches. With the -o or --only-matching option, this has no effect and a warning is given. -B NUM, --before-context=NUM Print NUM lines of leading context before matching lines. Places a line containing a group separator (--) between contiguous groups of matches. With the -o or --only-matching option, this has no effect and a warning is given. -C NUM, -NUM, --context=NUM Print NUM lines of output context. Places a line containing a group separator (--) between contiguous groups of matches. With the -o or --only-matching option, this has no effect and a warning is given. --group-separator=SEP When -A, -B, or -C are in use, print SEP instead of -- between groups of lines. --no-group-separator When -A, -B, or -C are in use, do not print a separator between groups of lines. File and Directory Selection -a, --text Process a binary file as if it were text; this is equivalent to the --binary-files=text option. 
--binary-files=TYPE If a file's data or metadata indicate that the file contains binary data, assume that the file is of type TYPE. Non-text bytes indicate binary data; these are either output bytes that are improperly encoded for the current locale, or null input bytes when the -z option is not given. By default, TYPE is binary, and grep suppresses output after null input binary data is discovered, and suppresses output lines that contain improperly encoded data. When some output is suppressed, grep follows any output with a message to standard error saying that a binary file matches. If TYPE is without-match, when grep discovers null input binary data it assumes that the rest of the file does not match; this is equivalent to the -I option. If TYPE is text, grep processes a binary file as if it were text; this is equivalent to the -a option. When type is binary, grep may treat non-text bytes as line terminators even without the -z option. This means choosing binary versus text can affect whether a pattern matches a file. For example, when type is binary the pattern q$ might match q immediately followed by a null byte, even though this is not matched when type is text. Conversely, when type is binary the pattern . (period) might not match a null byte. Warning: The -a option might output binary garbage, which can have nasty side effects if the output is a terminal and if the terminal driver interprets some of it as commands. On the other hand, when reading files whose text encodings are unknown, it can be helpful to use -a or to set LC_ALL='C' in the environment, in order to find more matches even if the matches are unsafe for direct display. -D ACTION, --devices=ACTION If an input file is a device, FIFO or socket, use ACTION to process it. By default, ACTION is read, which means that devices are read just as if they were ordinary files. If ACTION is skip, devices are silently skipped. 
-d ACTION, --directories=ACTION If an input file is a directory, use ACTION to process it. By default, ACTION is read, i.e., read directories just as if they were ordinary files. If ACTION is skip, silently skip directories. If ACTION is recurse, read all files under each directory, recursively, following symbolic links only if they are on the command line. This is equivalent to the -r option. --exclude=GLOB Skip any command-line file with a name suffix that matches the pattern GLOB, using wildcard matching; a name suffix is either the whole name, or a trailing part that starts with a non-slash character immediately after a slash (/) in the name. When searching recursively, skip any subfile whose base name matches GLOB; the base name is the part after the last slash. A pattern can use *, ?, and [...] as wildcards, and \ to quote a wildcard or backslash character literally. --exclude-from=FILE Skip files whose base name matches any of the file-name globs read from FILE (using wildcard matching as described under --exclude). --exclude-dir=GLOB Skip any command-line directory with a name suffix that matches the pattern GLOB. When searching recursively, skip any subdirectory whose base name matches GLOB. Ignore any redundant trailing slashes in GLOB. -I Process a binary file as if it did not contain matching data; this is equivalent to the --binary-files=without-match option. --include=GLOB Search only files whose base name matches GLOB (using wildcard matching as described under --exclude). If contradictory --include and --exclude options are given, the last matching one wins. If no --include or --exclude options match, a file is included unless the first such option is --include. -r, --recursive Read all files under each directory, recursively, following symbolic links only if they are on the command line. Note that if no file operand is given, grep searches the working directory. This is equivalent to the -d recurse option. 
-R, --dereference-recursive Read all files under each directory, recursively. Follow all symbolic links, unlike -r. Other Options --line-buffered Use line buffering on output. This can cause a performance penalty. -U, --binary Treat the file(s) as binary. By default, under MS-DOS and MS-Windows, grep guesses whether a file is text or binary as described for the --binary-files option. If grep decides the file is a text file, it strips the CR characters from the original file contents (to make regular expressions with ^ and $ work correctly). Specifying -U overrules this guesswork, causing all files to be read and passed to the matching mechanism verbatim; if the file is a text file with CR/LF pairs at the end of each line, this will cause some regular expressions to fail. This option has no effect on platforms other than MS-DOS and MS-Windows. -z, --null-data Treat input and output data as sequences of lines, each terminated by a zero byte (the ASCII NUL character) instead of a newline. Like the -Z or --null option, this option can be used with commands like sort -z to process arbitrary file names.
# egrep > Find patterns in files using extended regular expression (supports `?`, `+`, > `{}`, `()` and `|`). More information: https://manned.org/egrep. * Search for a pattern within a file: `egrep "{{search_pattern}}" {{path/to/file}}` * Search for a pattern within multiple files: `egrep "{{search_pattern}}" {{path/to/file1}} {{path/to/file2}} {{path/to/file3}}` * Search `stdin` for a pattern: `cat {{path/to/file}} | egrep {{search_pattern}}` * Print file name and line number for each match: `egrep --with-filename --line-number "{{search_pattern}}" {{path/to/file}}` * Search for a pattern in all files recursively in a directory, ignoring binary files: `egrep --recursive --binary-files={{without-match}} "{{search_pattern}}" {{path/to/directory}}` * Search for lines that do not match a pattern: `egrep --invert-match "{{search_pattern}}" {{path/to/file}}`
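`egrep` is the historical name for `grep -E`; in ERE syntax, operators like `|`, `+` and `()` work without backslash escaping. A minimal demonstration on inline input (using `grep -E`, since newer grep releases warn that the `egrep` name is deprecated):

```shell
# Match 'cat' or 'cart' exactly, using ERE alternation and grouping.
printf 'cat\ncar\ncart\ndog\n' | grep -E '^ca(t|rt)$'
# → cat
# → cart
```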
setfacl
This utility sets Access Control Lists (ACLs) of files and directories. On the command line, a sequence of commands is followed by a sequence of files (which in turn can be followed by another sequence of commands, ...). The -m and -x options expect an ACL on the command line. Multiple ACL entries are separated by comma characters (`,'). The -M and -X options read an ACL from a file or from standard input. The ACL entry format is described in Section ACL ENTRIES. The --set and --set-file options set the ACL of a file or a directory. The previous ACL is replaced. ACL entries for this operation must include permissions. The -m (--modify) and -M (--modify-file) options modify the ACL of a file or directory. ACL entries for this operation must include permissions. The -x (--remove) and -X (--remove-file) options remove ACL entries. It is not an error to remove an entry which does not exist. Only ACL entries without the perms field are accepted as parameters, unless POSIXLY_CORRECT is defined. When reading from files using the -M and -X options, setfacl accepts the output getfacl produces. There is at most one ACL entry per line. After a Pound sign (`#'), everything up to the end of the line is treated as a comment. If setfacl is used on a file system which does not support ACLs, setfacl operates on the file mode permission bits. If the ACL does not fit completely in the permission bits, setfacl modifies the file mode permission bits to reflect the ACL as closely as possible, writes an error message to standard error, and returns with an exit status greater than 0. PERMISSIONS The file owner and processes capable of CAP_FOWNER are granted the right to modify ACLs of a file. This is analogous to the permissions required for accessing the file mode. (On current Linux systems, root is the only user with the CAP_FOWNER capability.) -b, --remove-all Remove all extended ACL entries. The base ACL entries of the owner, group and others are retained. 
-k, --remove-default Remove the Default ACL. If no Default ACL exists, no warnings are issued. -n, --no-mask Do not recalculate the effective rights mask. The default behavior of setfacl is to recalculate the ACL mask entry, unless a mask entry was explicitly given. The mask entry is set to the union of all permissions of the owning group, and all named user and group entries. (These are exactly the entries affected by the mask entry). --mask Do recalculate the effective rights mask, even if an ACL mask entry was explicitly given. (See the -n option.) -d, --default All operations apply to the Default ACL. Regular ACL entries in the input set are promoted to Default ACL entries. Default ACL entries in the input set are discarded. (A warning is issued if that happens). --restore={file|-} Restore a permission backup created by `getfacl -R' or similar. All permissions of a complete directory subtree are restored using this mechanism. If the input contains owner comments or group comments, setfacl attempts to restore the owner and owning group. If the input contains flags comments (which define the setuid, setgid, and sticky bits), setfacl sets those three bits accordingly; otherwise, it clears them. This option cannot be mixed with other options except `--test'. If the file specified is '-', then it will be read from standard input. --test Test mode. Instead of changing the ACLs of any files, the resulting ACLs are listed. -R, --recursive Apply operations to all files and directories recursively. This option cannot be mixed with `--restore'. -L, --logical Logical walk, follow symbolic links to directories. The default behavior is to follow symbolic link arguments, and skip symbolic links encountered in subdirectories. Only effective in combination with -R. This option cannot be mixed with `--restore'. -P, --physical Physical walk, do not follow symbolic links to directories. This also skips symbolic link arguments. Only effective in combination with -R. 
This option cannot be mixed with `--restore'. -v, --version Print the version of setfacl and exit. -h, --help Print help explaining the command line options. -- End of command line options. All remaining parameters are interpreted as file names, even if they start with a dash. - If the file name parameter is a single dash, setfacl reads a list of files from standard input. ACL ENTRIES The setfacl utility recognizes the following ACL entry formats (blanks inserted for clarity): [d[efault]:] [u[ser]:]uid [:perms] Permissions of a named user. Permissions of the file owner if uid is empty. [d[efault]:] g[roup]:gid [:perms] Permissions of a named group. Permissions of the owning group if gid is empty. [d[efault]:] m[ask][:] [:perms] Effective rights mask [d[efault]:] o[ther][:] [:perms] Permissions of others. Whitespace between delimiter characters and non-delimiter characters is ignored. Proper ACL entries including permissions are used in modify and set operations. (options -m, -M, --set and --set-file). Entries without the perms field are used for deletion of entries (options -x and -X). For uid and gid you can specify either a name or a number. Character literals may be specified with a backslash followed by the 3-digit octal digits corresponding to the ASCII code for the character (e.g., \101 for 'A'). If the name contains a literal backslash followed by 3 digits, the backslash must be escaped (i.e., \\). The perms field is a combination of characters that indicate the read (r), write (w), execute (x) permissions. Dash characters in the perms field (-) are ignored. The character X stands for the execute permission if the file is a directory or already has execute permission for some user. Alternatively, the perms field can define the permissions numerically, as a bit-wise combination of read (4), write (2), and execute (1). Zero perms fields or perms fields that only consist of dashes indicate no permissions. 
AUTOMATICALLY CREATED ENTRIES Initially, files and directories contain only the three base ACL entries for the owner, the group, and others. There are some rules that need to be satisfied in order for an ACL to be valid: * The three base entries cannot be removed. There must be exactly one entry of each of these base entry types. * Whenever an ACL contains named user entries or named group objects, it must also contain an effective rights mask. * Whenever an ACL contains any Default ACL entries, the three Default ACL base entries (default owner, default group, and default others) must also exist. * Whenever a Default ACL contains named user entries or named group objects, it must also contain a default effective rights mask. To help the user ensure these rules, setfacl creates entries from existing entries under the following conditions: * If an ACL contains named user or named group entries, and no mask entry exists, a mask entry containing the same permissions as the group entry is created. Unless the -n option is given, the permissions of the mask entry are further adjusted to include the union of all permissions affected by the mask entry. (See the -n option description). * If a Default ACL entry is created, and the Default ACL contains no owner, owning group, or others entry, a copy of the ACL owner, owning group, or others entry is added to the Default ACL. * If a Default ACL contains named user entries or named group entries, and no mask entry exists, a mask entry containing the same permissions as the Default ACL's group entry is added. Unless the -n option is given, the permissions of the mask entry are further adjusted to include the union of all permissions affected by the mask entry. (See the -n option description).
# setfacl > Set file access control lists (ACL). More information: > https://manned.org/setfacl. * Modify ACL of a file for user with read and write access: `setfacl -m u:{{username}}:rw {{file}}` * Modify default ACL of a file for all users: `setfacl -d -m u::rw {{file}}` * Remove ACL of a file for a user: `setfacl -x u:{{username}} {{file}}` * Remove all ACL entries of a file: `setfacl -b {{file}}`
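The perms field of an ACL entry can be given symbolically (`rwx`) or numerically, where the numeric form is the bitwise sum of read (4), write (2), and execute (1). A small sketch of that mapping — the `perms_to_num` helper and the user name `alice` are ours, not part of setfacl:

```shell
# Convert a symbolic perms field to its numeric equivalent (r=4, w=2, x=1).
perms_to_num() {
    n=0
    case $1 in *r*) n=$((n + 4)) ;; esac
    case $1 in *w*) n=$((n + 2)) ;; esac
    case $1 in *x*) n=$((n + 1)) ;; esac
    echo "$n"
}

perms_to_num rw-   # → 6, so `setfacl -m u:alice:rw- file` ≡ `setfacl -m u:alice:6 file`
perms_to_num r-x   # → 5
```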
paste
The paste utility shall concatenate the corresponding lines of the given input files, and write the resulting lines to standard output. The default operation of paste shall concatenate the corresponding lines of the input files. The <newline> of every line except the line from the last input file shall be replaced with a <tab>. If an end-of-file condition is detected on one or more input files, but not all input files, paste shall behave as though empty lines were read from the files on which end-of-file was detected, unless the -s option is specified. The paste utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines. The following options shall be supported: -d list Unless a <backslash> character appears in list, each character in list is an element specifying a delimiter character. If a <backslash> character appears in list, the <backslash> character and one or more characters following it are an element specifying a delimiter character as described below. These elements specify one or more delimiters to use, instead of the default <tab>, to replace the <newline> of the input lines. The elements in list shall be used circularly; that is, when the list is exhausted the first element from the list is reused. When the -s option is specified: * The last <newline> in a file shall not be modified. * The delimiter shall be reset to the first element of list after each file operand is processed. When the -s option is not specified: * The <newline> characters in the file specified by the last file operand shall not be modified. * The delimiter shall be reset to the first element of list each time a line is processed from each file. If a <backslash> character appears in list, it and the character following it shall be used to represent the following delimiter characters: \n <newline>. \t <tab>. \\ <backslash> character. \0 Empty string (not a null character). 
If '\0' is immediately followed by the character 'x', the character 'X', or any character defined by the LC_CTYPE digit keyword (see the Base Definitions volume of POSIX.1‐2017, Chapter 7, Locale), the results are unspecified. If any other characters follow the <backslash>, the results are unspecified. -s Concatenate all of the lines from each input file into one line of output per file, in command line order. The <newline> of every line except the last line in each input file shall be replaced with a <tab>, unless otherwise specified by the -d option. If an input file is empty, the output line corresponding to that file shall consist of only a <newline> character.
# paste > Merge lines of files. More information: > https://www.gnu.org/software/coreutils/paste. * Join all the lines into a single line, using TAB as delimiter: `paste -s {{path/to/file}}` * Join all the lines into a single line, using the specified delimiter: `paste -s -d {{delimiter}} {{path/to/file}}` * Merge two files side by side, each in its column, using TAB as delimiter: `paste {{file1}} {{file2}}` * Merge two files side by side, each in its column, using the specified delimiter: `paste -d {{delimiter}} {{file1}} {{file2}}` * Merge two files, with lines added alternately: `paste -d '\n' {{file1}} {{file2}}`
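The circular use of the `-d` delimiter list and the `-s` option described above can be sketched in a short shell session (file names are illustrative):

```shell
cd "$(mktemp -d)"
printf '1\n2\n3\n' > nums.txt
printf 'a\nb\nc\n' > letters.txt

# Default: corresponding lines side by side, joined with TAB.
paste nums.txt letters.txt

# The -d list is used circularly: with ',;' the first join on each line
# uses ',', the second uses ';', then ',' again, and so on.
paste -d ',;' nums.txt letters.txt nums.txt
# 1,a;1
# 2,b;2
# 3,c;3

# -s serializes each file onto one output line, in command-line order.
paste -s -d ',' nums.txt
# 1,2,3
```

Because the delimiter list is reset each time a line is processed when `-s` is absent, every output line above starts its joins from `,` again.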
busctl
busctl may be used to introspect and monitor the D-Bus bus. The following options are understood: --address=ADDRESS Connect to the bus specified by ADDRESS instead of using suitable defaults for either the system or user bus (see --system and --user options). --show-machine When showing the list of peers, show a column containing the names of containers they belong to. See systemd-machined.service(8). --unique When showing the list of peers, show only "unique" names (of the form ":number.number"). --acquired The opposite of --unique — only "well-known" names will be shown. --activatable When showing the list of peers, show only peers which have actually not been activated yet, but may be started automatically if accessed. --match=MATCH When showing messages being exchanged, show only the subset matching MATCH. See sd_bus_add_match(3). --size= When used with the capture command, specifies the maximum bus message size to capture ("snaplen"). Defaults to 4096 bytes. --list When used with the tree command, shows a flat list of object paths instead of a tree. -q, --quiet When used with the call command, suppresses display of the response message payload. Note that even if this option is specified, errors returned will still be printed and the tool will indicate success or failure with the process exit code. --verbose When used with the call or get-property command, shows output in a more verbose format. --xml-interface When used with the introspect call, dump the XML description received from the D-Bus org.freedesktop.DBus.Introspectable.Introspect call instead of the normal output. --json=MODE When used with the call or get-property command, shows output formatted as JSON. Expects one of "short" (for the shortest possible output without any redundant whitespace or line breaks) or "pretty" (for a pretty version of the same, with indentation and line breaks). 
Note that transformation from D-Bus marshalling to JSON is done in a loss-less way, which means type information is embedded into the JSON object tree. -j Equivalent to --json=pretty when invoked interactively from a terminal. Otherwise equivalent to --json=short, in particular when the output is piped to some other program. --expect-reply=BOOL When used with the call command, specifies whether busctl shall wait for completion of the method call, output the returned method response data, and return success or failure via the process exit code. If this is set to "no", the method call will be issued but no response is expected, the tool terminates immediately, and thus no response can be shown, and no success or failure is returned via the exit code. To only suppress output of the reply message payload, use --quiet above. Defaults to "yes". --auto-start=BOOL When used with the call or emit command, specifies whether the method call should implicitly activate the called service, should it not be running yet but is configured to be auto-started. Defaults to "yes". --allow-interactive-authorization=BOOL When used with the call command, specifies whether the services may enforce interactive authorization while executing the operation, if the security policy is configured for this. Defaults to "yes". --timeout=SECS When used with the call command, specifies the maximum time to wait for method call completion. If no time unit is specified, assumes seconds. The usual other units are understood, too (ms, us, s, min, h, d, w, month, y). Note that this timeout does not apply if --expect-reply=no is used, as the tool does not wait for any reply message then. When not specified or when set to 0, the default of "25s" is assumed. --augment-creds=BOOL Controls whether credential data reported by list or status shall be augmented with data from /proc/. 
When this is turned on, the data shown is possibly inconsistent, as the data read from /proc/ might be more recent than the rest of the credential information. Defaults to "yes". --watch-bind=BOOL Controls whether to wait for the specified AF_UNIX bus socket to appear in the file system before connecting to it. Defaults to off. When enabled, the tool will watch the file system until the socket is created and then connect to it. --destination=SERVICE Takes a service name. When used with the emit command, a signal is emitted to the specified service. --user Talk to the service manager of the calling user, rather than the service manager of the system. --system Talk to the service manager of the system. This is the implied default. -H, --host= Execute the operation remotely. Specify a hostname, or a username and hostname separated by "@", to connect to. The hostname may optionally be suffixed by a port ssh is listening on, separated by ":", and then a container name, separated by "/", which connects directly to a specific container on the specified host. This will use SSH to talk to the remote machine manager instance. Container names may be enumerated with machinectl -H HOST. Put IPv6 addresses in brackets. -M, --machine= Execute operation on a local container. Specify a container name to connect to, optionally prefixed by a user name to connect as and a separating "@" character. If the special string ".host" is used in place of the container name, a connection to the local system is made (which is useful to connect to a specific user's user bus: "--user --machine=lennart@.host"). If the "@" syntax is not used, the connection is made as root user. If the "@" syntax is used either the left hand side or the right hand side may be omitted (but not both) in which case the local user name and ".host" are implied. -l, --full Do not ellipsize the output in list command. --no-pager Do not pipe output into a pager. --no-legend Do not print the legend, i.e. 
column headers and the footer with hints. -h, --help Print a short help text and exit. --version Print a short version string and exit.
# busctl > Introspect and monitor the D-Bus bus. More information: > https://www.freedesktop.org/software/systemd/man/busctl.html. * Show all peers on the bus, by their service names: `busctl list` * Show process information and credentials of a bus service, a process, or the owner of the bus (if no parameter is specified): `busctl status {{service|pid}}` * Dump messages being exchanged. If no service is specified, show all messages on the bus: `busctl monitor {{service1 service2 ...}}` * Show an object tree of one or more services (or all services if no service is specified): `busctl tree {{service1 service2 ...}}` * Show interfaces, methods, properties and signals of the specified object on the specified service: `busctl introspect {{service}} {{path/to/object}}` * Retrieve the current value of one or more object properties: `busctl get-property {{service}} {{path/to/object}} {{interface_name}} {{property_name}}` * Invoke a method and show the response: `busctl call {{service}} {{path/to/object}} {{interface_name}} {{method_name}}`
readlink
Note realpath(1) is the preferred command to use for canonicalization functionality. Print value of a symbolic link or canonical file name -f, --canonicalize canonicalize by following every symlink in every component of the given name recursively; all but the last component must exist -e, --canonicalize-existing canonicalize by following every symlink in every component of the given name recursively, all components must exist -m, --canonicalize-missing canonicalize by following every symlink in every component of the given name recursively, without requirements on components existence -n, --no-newline do not output the trailing delimiter -q, --quiet -s, --silent suppress most error messages (on by default) -v, --verbose report error messages -z, --zero end each output line with NUL, not newline --help display this help and exit --version output version information and exit
# readlink > Follow symlinks and get symlink information. More information: > https://www.gnu.org/software/coreutils/readlink. * Print the value of a symlink (the path it points to, which may be relative): `readlink {{path/to/symlink_file}}` * Print the canonical, absolute path of a file, resolving every symlink along the way: `readlink -f {{path/to/file}}`
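The difference between printing a link's stored value and full canonicalization can be seen in a small sketch (paths are illustrative):

```shell
cd "$(mktemp -d)"
touch target.txt
ln -s target.txt link1
ln -s link1 link2

# Plain readlink prints one hop: the value stored in the symlink,
# which may itself be another symlink and may be relative.
readlink link2
# link1

# -f follows every symlink recursively and prints an absolute,
# canonical path to the final target.
readlink -f link2
```

As the man text notes, `realpath(1)` is the preferred tool for the canonicalization half of this job.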
sh
The sh utility is a command language interpreter that shall execute commands read from a command line string, the standard input, or a specified file. The application shall ensure that the commands to be executed are expressed in the language described in Chapter 2, Shell Command Language. Pathname expansion shall not fail due to the size of a file. Shell input and output redirections have an implementation- defined offset maximum that is established in the open file description. The sh utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines, with an extension for support of a leading <plus-sign> ('+') as noted below. The -a, -b, -C, -e, -f, -m, -n, -o option, -u, -v, and -x options are described as part of the set utility in Section 2.14, Special Built-In Utilities. The option letters derived from the set special built-in shall also be accepted with a leading <plus- sign> ('+') instead of a leading <hyphen-minus> (meaning the reverse case of the option as described in this volume of POSIX.1‐2017). The following additional options shall be supported: -c Read commands from the command_string operand. Set the value of special parameter 0 (see Section 2.5.2, Special Parameters) from the value of the command_name operand and the positional parameters ($1, $2, and so on) in sequence from the remaining argument operands. No commands shall be read from the standard input. -i Specify that the shell is interactive; see below. An implementation may treat specifying the -i option as an error if the real user ID of the calling process does not equal the effective user ID or if the real group ID does not equal the effective group ID. -s Read commands from the standard input. If there are no operands and the -c option is not specified, the -s option shall be assumed. 
If the -i option is present, or if there are no operands and the shell's standard input and standard error are attached to a terminal, the shell is considered to be interactive.
# sh > Bourne shell, the standard command language interpreter. See also > `histexpand` for history expansion. More information: https://manned.org/sh. * Start an interactive shell session: `sh` * Execute a command and then exit: `sh -c "{{command}}"` * Execute a script: `sh {{path/to/script.sh}}` * Read and execute commands from `stdin`: `sh -s`
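The way `-c` sets special parameter `$0` from the command_name operand (with the remaining operands as `$1`, `$2`, ...), and the way `-s` reads commands from standard input while treating operands as positional parameters, can be sketched as:

```shell
# With -c, the operand after the command string becomes $0 and the
# rest become $1, $2, ...
sh -c 'echo "$0 got $1 and $2"' myname foo bar
# myname got foo and bar

# With -s, commands are read from stdin and the operands become $1, $2, ...
echo 'echo "first operand: $1"' | sh -s foo
# first operand: foo
```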
mpstat
The mpstat command writes to standard output activities for each available processor, processor 0 being the first one. Global average activities among all processors are also reported. The mpstat command can be used on both SMP and UP machines, but in the latter, only global average activities will be printed. If no activity has been selected, then the default report is the CPU utilization report. The interval parameter specifies the amount of time in seconds between each report. A value of 0 (or no parameters at all) indicates that processor statistics are to be reported for the time since system startup (boot). The count parameter can be specified in conjunction with the interval parameter if the latter is not set to zero. The value of count determines the number of reports generated at interval seconds apart. If the interval parameter is specified without the count parameter, the mpstat command generates reports continuously. -A This option is equivalent to specifying -n -u -I ALL. This option also implies specifying -N ALL -P ALL unless these options are explicitly set on the command line. --dec={ 0 | 1 | 2 } Specify the number of decimal places to use (0 to 2, default value is 2). -H Also detect and display statistics for physically hotplugged vCPUs. -I { keyword[,...] | ALL } Report interrupt statistics. Possible keywords are CPU, SCPU, and SUM. With the CPU keyword, the number of each individual interrupt received per second by the CPU or CPUs is displayed. Interrupts are those listed in the /proc/interrupts file. With the SCPU keyword, the number of each individual software interrupt received per second by the CPU or CPUs is displayed. This option works only with kernels 2.6.31 and later. Software interrupts are those listed in the /proc/softirqs file. With the SUM keyword, the mpstat command reports the total number of interrupts per processor. The following values are displayed: CPU Processor number. 
The keyword all indicates that statistics are calculated as averages among all processors. intr/s Show the total number of interrupts received per second by the CPU or CPUs. The ALL keyword is equivalent to specifying all the keywords above and therefore all the interrupts statistics are displayed. -N { node_list | ALL } Indicate the NUMA nodes for which statistics are to be reported. node_list is a list of comma-separated values or range of values (e.g., 0,2,4-7,12-). Note that node all is the global average among all nodes. The ALL keyword indicates that statistics are to be reported for all nodes. -n Report summary CPU statistics based on NUMA node placement. The following values are displayed: NODE Logical NUMA node number. The keyword all indicates that statistics are calculated as averages among all nodes. All the other fields are the same as those displayed with option -u (see below). -o JSON Display the statistics in JSON (JavaScript Object Notation) format. JSON output field order is undefined, and new fields may be added in the future. -P { cpu_list | ALL } Indicate the processors for which statistics are to be reported. cpu_list is a list of comma-separated values or range of values (e.g., 0,2,4-7,12-). Note that processor 0 is the first processor, and processor all is the global average among all processors. The ALL keyword indicates that statistics are to be reported for all processors. Offline processors are not displayed. -T Display topology elements in the CPU report (see option -u below). The following elements are displayed: CORE Logical core number. SOCK Logical socket number. NODE Logical NUMA node number. -u Report CPU utilization. The following values are displayed: CPU Processor number. The keyword all indicates that statistics are calculated as averages among all processors. %usr Show the percentage of CPU utilization that occurred while executing at the user level (application). 
%nice Show the percentage of CPU utilization that occurred while executing at the user level with nice priority. %sys Show the percentage of CPU utilization that occurred while executing at the system level (kernel). Note that this does not include time spent servicing hardware and software interrupts. %iowait Show the percentage of time that the CPU or CPUs were idle during which the system had an outstanding disk I/O request. %irq Show the percentage of time spent by the CPU or CPUs to service hardware interrupts. %soft Show the percentage of time spent by the CPU or CPUs to service software interrupts. %steal Show the percentage of time spent in involuntary wait by the virtual CPU or CPUs while the hypervisor was servicing another virtual processor. %guest Show the percentage of time spent by the CPU or CPUs to run a virtual processor. %gnice Show the percentage of time spent by the CPU or CPUs to run a niced guest. %idle Show the percentage of time that the CPU or CPUs were idle and the system did not have an outstanding disk I/O request. -V Print version number then exit.
# mpstat > Report CPU statistics. More information: https://manned.org/mpstat. * Display CPU statistics every 2 seconds: `mpstat {{2}}` * Display 5 reports, one by one, at 2 second intervals: `mpstat {{2}} {{5}}` * Display 5 reports, one by one, from a given processor, at 2 second intervals: `mpstat -P {{0}} {{2}} {{5}}`
nm
The nm utility shall display symbolic information appearing in the object file, executable file, or object-file library named by file. If no symbolic information is available for a valid input file, the nm utility shall report that fact, but not consider it an error condition. The default base used when numeric values are written is unspecified. On XSI-conformant systems, it shall be decimal if the -P option is not specified. The nm utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines. The following options shall be supported: -A Write the full pathname or library name of an object on each line. -e Write only external (global) and static symbol information. -f Produce full output. Write redundant symbols (.text, .data, and .bss), normally suppressed. -g Write only external (global) symbol information. -o Write numeric values in octal (equivalent to -t o). -P Write information in a portable output format, as specified in the STDOUT section. -t format Write each numeric value in the specified format. The format shall be dependent on the single character used as the format option-argument: d decimal (default if -P is not specified). o octal. x hexadecimal (default if -P is specified). -u Write only undefined symbols. -v Sort output by value instead of by symbol name. -x Write numeric values in hexadecimal (equivalent to -t x).
# nm > List symbol names in object files. More information: https://manned.org/nm. * List global (extern) functions in a file (prefixed with T): `nm -g {{path/to/file.o}}` * List only undefined symbols in a file: `nm -u {{path/to/file.o}}` * List all symbols, even debugging symbols: `nm -a {{path/to/file.o}}` * Demangle C++ symbols (make them readable): `nm --demangle {{path/to/file.o}}`
logger
The logger utility saves a message, in an unspecified manner and format, containing the string operands provided by the user. The messages are expected to be evaluated later by personnel performing system administration tasks. It is implementation-defined whether messages written in locales other than the POSIX locale are effective. None.
# logger > Add messages to syslog (/var/log/syslog). More information: > https://manned.org/logger. * Log a message to syslog: `logger {{message}}` * Take input from `stdin` and log to syslog: `echo {{log_entry}} | logger` * Send the output to a remote syslog server running at a given port. Default port is 514: `echo {{log_entry}} | logger --server {{hostname}} --port {{port}}` * Use a specific tag for every line logged. Default is the name of the logged-in user: `echo {{log_entry}} | logger --tag {{tag}}` * Log messages with a given priority. Default is `user.notice`. See `man logger` for all priority options: `echo {{log_entry}} | logger --priority {{user.warning}}`
fallocate
fallocate is used to manipulate the allocated disk space for a file, either to deallocate or preallocate it. For filesystems which support the fallocate(2) system call, preallocation is done quickly by allocating blocks and marking them as uninitialized, requiring no IO to the data blocks. This is much faster than creating a file by filling it with zeroes. The exit status returned by fallocate is 0 on success and 1 on failure. The length and offset arguments may be followed by the multiplicative suffixes KiB (=1024), MiB (=1024*1024), and so on for GiB, TiB, PiB, EiB, ZiB, and YiB (the "iB" is optional, e.g., "K" has the same meaning as "KiB") or the suffixes KB (=1000), MB (=1000*1000), and so on for GB, TB, PB, EB, ZB, and YB. The options --collapse-range, --dig-holes, --punch-hole, and --zero-range are mutually exclusive. -c, --collapse-range Removes a byte range from a file, without leaving a hole. The byte range to be collapsed starts at offset and continues for length bytes. At the completion of the operation, the contents of the file starting at the location offset+length will be appended at the location offset, and the file will be length bytes smaller. The option --keep-size may not be specified for the collapse-range operation. Available since Linux 3.15 for ext4 (only for extent-based files) and XFS. A filesystem may place limitations on the granularity of the operation, in order to ensure efficient implementation. Typically, offset and length must be a multiple of the filesystem logical block size, which varies according to the filesystem type and configuration. If a filesystem has such a requirement, the operation will fail with the error EINVAL if this requirement is violated. -d, --dig-holes Detect and dig holes. This makes the file sparse in-place, without using extra disk space. The minimum size of the hole depends on filesystem I/O block size (usually 4096 bytes). Also, when using this option, --keep-size is implied. 
If no range is specified by --offset and --length, then the entire file is analyzed for holes. You can think of this option as doing a "cp --sparse" and then renaming the destination file to the original, without the need for extra disk space. See --punch-hole for a list of supported filesystems. -i, --insert-range Insert a hole of length bytes from offset, shifting existing data. -l, --length length Specifies the length of the range, in bytes. -n, --keep-size Do not modify the apparent length of the file. This may effectively allocate blocks past EOF, which can be removed with a truncate. -o, --offset offset Specifies the beginning offset of the range, in bytes. -p, --punch-hole Deallocates space (i.e., creates a hole) in the byte range starting at offset and continuing for length bytes. Within the specified range, partial filesystem blocks are zeroed, and whole filesystem blocks are removed from the file. After a successful call, subsequent reads from this range will return zeroes. This option may not be specified at the same time as the --zero-range option. Also, when using this option, --keep-size is implied. Supported for XFS (since Linux 2.6.38), ext4 (since Linux 3.0), Btrfs (since Linux 3.7), tmpfs (since Linux 3.5) and gfs2 (since Linux 4.16). -v, --verbose Enable verbose mode. -x, --posix Enable POSIX operation mode. In that mode allocation operation always completes, but it may take longer time when fast allocation is not supported by the underlying filesystem. -z, --zero-range Zeroes space in the byte range starting at offset and continuing for length bytes. Within the specified range, blocks are preallocated for the regions that span the holes in the file. After a successful call, subsequent reads from this range will return zeroes. Zeroing is done within the filesystem preferably by converting the range into unwritten extents. 
This approach means that the specified range will not be physically zeroed out on the device (except for partial blocks at either end of the range), and I/O is (otherwise) required only to update metadata. Option --keep-size can be specified to prevent file length modification. Available since Linux 3.14 for ext4 (only for extent-based files) and XFS. -h, --help Display help text and exit. -V, --version Print version and exit.
# fallocate > Reserve or deallocate disk space to files. The utility allocates space > without zeroing. More information: https://manned.org/fallocate. * Reserve a file taking up 700 MiB of disk space: `fallocate --length {{700M}} {{path/to/file}}` * Shrink an already allocated file by 200 MiB: `fallocate --collapse-range --length {{200M}} {{path/to/file}}` * Shrink 20 MiB of space after 100 MiB in a file: `fallocate --collapse-range --offset {{100M}} --length {{20M}} {{path/to/file}}`
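A minimal sketch of preallocation, assuming a filesystem that supports the fallocate(2) system call (ext4, XFS, tmpfs, ...); the file name is illustrative:

```shell
cd "$(mktemp -d)"

# Preallocate 64 KiB instantly, without writing zeroes.
fallocate --length 64KiB data.bin
stat -c %s data.bin
# 65536

# --keep-size allocates blocks past EOF without changing the
# apparent file length.
fallocate --keep-size --offset 64KiB --length 64KiB data.bin
stat -c %s data.bin
# 65536
```

The --collapse-range and --zero-range operations additionally require ext4 (extent-based files) or XFS, so they are left out of this portable sketch.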
mkfifo
Create named pipes (FIFOs) with the given NAMEs. Mandatory arguments to long options are mandatory for short options too. -m, --mode=MODE set file permission bits to MODE, not a=rw - umask -Z set the SELinux security context to default type --context[=CTX] like -Z, or if CTX is specified then set the SELinux or SMACK security context to CTX --help display this help and exit --version output version information and exit
# mkfifo > Makes FIFOs (named pipes). More information: > https://www.gnu.org/software/coreutils/mkfifo. * Create a named pipe at a given path: `mkfifo {{path/to/pipe}}`
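A FIFO connects two processes: a writer blocks until a reader opens the other end. A quick sketch (the pipe name is illustrative):

```shell
cd "$(mktemp -d)"
mkfifo my_pipe

# Start the writer in the background, since it blocks until the
# reader (cat) opens the FIFO.
echo "hello through the pipe" > my_pipe &
cat my_pipe
# hello through the pipe
wait
```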
git-credential-store
Note Using this helper will store your passwords unencrypted on disk, protected only by filesystem permissions. If this is not an acceptable security tradeoff, try git-credential-cache(1), or find a helper that integrates with secure storage provided by your operating system. This command stores credentials indefinitely on disk for use by future Git programs. You probably don’t want to invoke this command directly; it is meant to be used as a credential helper by other parts of git. See gitcredentials(7) or EXAMPLES below. --file=<path> Use <path> to lookup and store credentials. The file will have its filesystem permissions set to prevent other users on the system from reading it, but will not be encrypted or otherwise protected. If not specified, credentials will be searched for from ~/.git-credentials and $XDG_CONFIG_HOME/git/credentials, and credentials will be written to ~/.git-credentials if it exists, or $XDG_CONFIG_HOME/git/credentials if it exists and the former does not. See also the section called “FILES”.
# git credential-store > `git` helper to store passwords on disk. More information: https://git- > scm.com/docs/git-credential-store. * Store Git credentials in a specific file: `git config credential.helper 'store --file={{path/to/file}}'`
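A local sketch of the helper in action; the repository, file name, and credentials below are illustrative. The store file holds one `https://username:password@host` line per credential, unencrypted:

```shell
cd "$(mktemp -d)"
git init -q demo && cd demo

# Point the helper at a custom file instead of ~/.git-credentials.
git config credential.helper 'store --file=../my-credentials'

# Pre-populate the store (normally git writes this after a successful login).
printf 'https://alice:s3cret@example.com\n' > ../my-credentials
chmod 600 ../my-credentials

# git credential fill (normally invoked internally by git) now resolves
# the username and password from the store without prompting.
printf 'protocol=https\nhost=example.com\n\n' | git credential fill
```

The output echoes the protocol and host and adds `username=alice` and `password=s3cret` lines, which is exactly what git consumes when authenticating.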
kill
The default signal for kill is TERM. Use -l or -L to list available signals. Particularly useful signals include HUP, INT, KILL, STOP, CONT, and 0. Alternate signals may be specified in three ways: -9, -SIGKILL or -KILL. Negative PID values may be used to choose whole process groups; see the PGID column in ps command output. A PID of -1 is special; it indicates all processes except the kill process itself and init. <pid> [...] Send signal to every <pid> listed. -<signal> -s <signal> --signal <signal> Specify the signal to be sent. The signal can be specified by using name or number. The behavior of signals is explained in signal(7) manual page. -q, --queue value Use sigqueue(3) rather than kill(2) and the value argument is used to specify an integer to be sent with the signal. If the receiving process has installed a handler for this signal using the SA_SIGINFO flag to sigaction(2), then it can obtain this data via the si_value field of the siginfo_t structure. -l, --list [signal] List signal names. This option has optional argument, which will convert signal number to signal name, or other way round. -L, --table List signal names in a nice table.
# kill > Sends a signal to a process, usually related to stopping the process. All > signals except for SIGKILL and SIGSTOP can be intercepted by the process to > perform a clean exit. More information: https://manned.org/kill. * Terminate a program using the default SIGTERM (terminate) signal: `kill {{process_id}}` * List available signal names (to be used without the `SIG` prefix): `kill -l` * Terminate a background job: `kill %{{job_id}}` * Terminate a program using the SIGHUP (hang up) signal. Many daemons will reload instead of terminating: `kill -{{1|HUP}} {{process_id}}` * Terminate a program using the SIGINT (interrupt) signal. This is typically initiated by the user pressing `Ctrl + C`: `kill -{{2|INT}} {{process_id}}` * Signal the operating system to immediately terminate a program (which gets no chance to capture the signal): `kill -{{9|KILL}} {{process_id}}` * Signal the operating system to pause a program until a SIGCONT ("continue") signal is received: `kill -{{19|STOP}} {{process_id}}` * Send a `SIGUSR1` signal to all processes with the given PGID (process group ID): `kill -{{SIGUSR1}} -{{group_id}}`
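The default TERM signal and the shell's 128-plus-signal-number exit-status convention can be sketched as:

```shell
# Start a long-running background job and terminate it with the
# default TERM signal.
sleep 60 &
pid=$!
kill "$pid"

# For a signal-terminated child, wait reports 128 + the signal number,
# so the default TERM (15) yields status 143.
status=0
wait "$pid" || status=$?
echo "$status"
# 143

# -l converts between signal numbers and names.
kill -l 15
# TERM
```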
exec
The exec utility shall open, close, and/or copy file descriptors as specified by any redirections as part of the command. If exec is specified without command or arguments, and any file descriptors with numbers greater than 2 are opened with associated redirection statements, it is unspecified whether those file descriptors remain open when the shell invokes another utility. Scripts concerned that child shells could misuse open file descriptors can always close them explicitly, as shown in one of the following examples. If exec is specified with command, it shall replace the shell with command without creating a new process. If arguments are specified, they shall be arguments to command. Redirection affects the current shell execution environment. None.
# exec > Replace the current process with another process. More information: > https://linuxcommand.org/lc3_man_pages/exech.html. * Replace with the specified command using the current environment variables: `exec {{command -with -flags}}` * Replace with the specified command, clearing environment variables: `exec -c {{command -with -flags}}` * Replace with the specified command and login using the default shell: `exec -l {{command -with -flags}}` * Replace with the specified command and change the process name: `exec -a {{process_name}} {{command -with -flags}}`
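Both behaviors described above, replacing the shell with a command and applying redirections to the current shell environment, can be sketched as:

```shell
# exec with a command replaces the shell process: the second echo
# is never reached.
sh -c 'exec echo "replaced the shell"; echo "never printed"'
# replaced the shell

# exec without a command applies its redirections to the current
# shell; here fd 3 is opened, written to, and explicitly closed.
cd "$(mktemp -d)"
sh -c 'exec 3> notes.txt; echo "via fd 3" >&3; exec 3>&-'
cat notes.txt
# via fd 3
```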
ln
In the first synopsis form, the ln utility shall create a new directory entry (link) at the destination path specified by the target_file operand. If the -s option is specified, a symbolic link shall be created for the file specified by the source_file operand. This first synopsis form shall be assumed when the final operand does not name an existing directory; if more than two operands are specified and the final is not an existing directory, an error shall result. In the second synopsis form, the ln utility shall create a new directory entry (link), or if the -s option is specified a symbolic link, for each file specified by a source_file operand, at a destination path in the existing directory named by target_dir. If the last operand specifies an existing file of a type not specified by the System Interfaces volume of POSIX.1‐2017, the behavior is implementation-defined. The corresponding destination path for each source_file shall be the concatenation of the target directory pathname, a <slash> character if the target directory pathname did not end in a <slash>, and the last pathname component of the source_file. The second synopsis form shall be assumed when the final operand names an existing directory. For each source_file: 1. If the destination path exists and was created by a previous step, it is unspecified whether ln shall write a diagnostic message to standard error, do nothing more with the current source_file, and go on to any remaining source_files; or will continue processing the current source_file. If the destination path exists: a. If the -f option is not specified, ln shall write a diagnostic message to standard error, do nothing more with the current source_file, and go on to any remaining source_files. b. If the destination path names the same directory entry as the current source_file, ln shall write a diagnostic message to standard error, do nothing more with the current source_file, and go on to any remaining source_files. c. 
Actions shall be performed equivalent to the unlink() function defined in the System Interfaces volume of POSIX.1‐2017, called using the destination path as the path argument. If this fails for any reason, ln shall write a diagnostic message to standard error, do nothing more with the current source_file, and go on to any remaining source_files. 2. If the -s option is specified, actions shall be performed equivalent to the symlink() function with source_file as the path1 argument and the destination path as the path2 argument. The ln utility shall do nothing more with source_file and shall go on to any remaining files. 3. If source_file is a symbolic link: a. If the -P option is in effect, actions shall be performed equivalent to the linkat() function with source_file as the path1 argument, the destination path as the path2 argument, AT_FDCWD as the fd1 and fd2 arguments, and zero as the flag argument. b. If the -L option is in effect, actions shall be performed equivalent to the linkat() function with source_file as the path1 argument, the destination path as the path2 argument, AT_FDCWD as the fd1 and fd2 arguments, and AT_SYMLINK_FOLLOW as the flag argument. The ln utility shall do nothing more with source_file and shall go on to any remaining files. 4. Actions shall be performed equivalent to the link() function defined in the System Interfaces volume of POSIX.1‐2017 using source_file as the path1 argument, and the destination path as the path2 argument. The ln utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines. The following options shall be supported: -f Force existing destination pathnames to be removed to allow the link. -L For each source_file operand that names a file of type symbolic link, create a (hard) link to the file referenced by the symbolic link. -P For each source_file operand that names a file of type symbolic link, create a (hard) link to the symbolic link itself. 
-s Create symbolic links instead of hard links. If the -s option is specified, the -L and -P options shall be silently ignored. Specifying more than one of the mutually-exclusive options -L and -P shall not be considered an error. The last option specified shall determine the behavior of the utility (unless the -s option causes it to be ignored). If the -s option is not specified and neither a -L nor a -P option is specified, it is implementation-defined which of the -L and -P options will be used as the default.
# ln

> Creates links to files and directories.
> More information: https://www.gnu.org/software/coreutils/ln.

* Create a symbolic link to a file or directory:

`ln -s {{/path/to/file_or_directory}} {{path/to/symlink}}`

* Overwrite an existing symbolic link to point to a different file:

`ln -sf {{/path/to/new_file}} {{path/to/symlink}}`

* Create a hard link to a file:

`ln {{/path/to/file}} {{path/to/hardlink}}`
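The difference between the two link types shows up in the inode numbers. A sketch with invented file names: a hard link shares the original's inode and keeps the data alive after the original is removed, while a symlink merely stores the path and dangles.

```shell
tmp=$(mktemp -d) && cd "$tmp"
echo "hello" > original.txt
ln original.txt hard.txt          # hard link: same inode as original.txt
ln -s original.txt soft.txt       # symlink: its own inode, stores only the path
ls -li                            # first column shows the shared inode number
rm original.txt
cat hard.txt                      # → hello (data survives via the hard link)
cat soft.txt 2>/dev/null || echo "soft.txt dangles"
```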
sha224sum
Print or check SHA224 (224-bit) checksums. With no FILE, or when FILE is -, read standard input. -b, --binary read in binary mode -c, --check read checksums from the FILEs and check them --tag create a BSD-style checksum -t, --text read in text mode (default) -z, --zero end each output line with NUL, not newline, and disable file name escaping The following five options are useful only when verifying checksums: --ignore-missing don't fail or report status for missing files --quiet don't print OK for each successfully verified file --status don't output anything, status code shows success --strict exit non-zero for improperly formatted checksum lines -w, --warn warn about improperly formatted checksum lines --help display this help and exit --version output version information and exit The sums are computed as described in RFC 3874. When checking, the input should be a former output of this program. The default mode is to print a line with: checksum, a space, a character indicating input mode ('*' for binary, ' ' for text or where binary is insignificant), and name for each FILE. Note: There is no difference between binary mode and text mode on GNU systems.
# sha224sum

> Calculate SHA224 cryptographic checksums.
> More information: https://www.gnu.org/software/coreutils/manual/html_node/sha2-utilities.html.

* Calculate the SHA224 checksum for one or more files:

`sha224sum {{path/to/file1 path/to/file2 ...}}`

* Calculate and save the list of SHA224 checksums to a file:

`sha224sum {{path/to/file1 path/to/file2 ...}} > {{path/to/file.sha224}}`

* Calculate a SHA224 checksum from `stdin`:

`{{command}} | sha224sum`

* Read a file of SHA224 sums and filenames and verify all files have matching checksums:

`sha224sum --check {{path/to/file.sha224}}`

* Only show a message for missing files or when verification fails:

`sha224sum --check --quiet {{path/to/file.sha224}}`

* Only show a message when verification fails, ignoring missing files:

`sha224sum --ignore-missing --check --quiet {{path/to/file.sha224}}`
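A round trip of the generate-then-verify workflow described above (a sketch; the file names are invented):

```shell
tmp=$(mktemp -d) && cd "$tmp"
echo "important data" > data.txt
sha224sum data.txt > data.sha224       # record checksum + filename
sha224sum --check data.sha224          # → data.txt: OK
echo "tampered" > data.txt
sha224sum --check data.sha224 || echo "verification failed"
```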
tr
The tr utility shall copy the standard input to the standard output with substitution or deletion of selected characters. The options specified and the string1 and string2 operands shall control translations that occur while copying characters and single-character collating elements. The tr utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines. The following options shall be supported: -c Complement the set of values specified by string1. See the EXTENDED DESCRIPTION section. -C Complement the set of characters specified by string1. See the EXTENDED DESCRIPTION section. -d Delete all occurrences of input characters that are specified by string1. -s Replace instances of repeated characters with a single character, as described in the EXTENDED DESCRIPTION section.
# tr

> Translate characters: run replacements based on single characters and character sets.
> More information: https://www.gnu.org/software/coreutils/tr.

* Replace all occurrences of a character in a file, and print the result:

`tr {{find_character}} {{replace_character}} < {{path/to/file}}`

* Replace all occurrences of a character from another command's output:

`echo {{text}} | tr {{find_character}} {{replace_character}}`

* Map each character of the first set to the corresponding character of the second set:

`tr '{{abcd}}' '{{jkmn}}' < {{path/to/file}}`

* Delete all occurrences of the specified set of characters from the input:

`tr -d '{{input_characters}}' < {{path/to/file}}`

* Compress a series of identical characters to a single character:

`tr -s '{{input_characters}}' < {{path/to/file}}`

* Translate the contents of a file to upper-case:

`tr "[:lower:]" "[:upper:]" < {{path/to/file}}`

* Strip out non-printable characters from a file:

`tr -cd "[:print:]" < {{path/to/file}}`
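Two behaviours worth seeing side by side: a paired-set mapping (here ROT13, a classic use of two ranged sets) and `-cd`, which deletes the complement of a set. These inputs are illustrative, not from the manual:

```shell
# ROT13: map each letter to the letter 13 places along (wrapping at Z/z)
echo "Hello" | tr 'A-Za-z' 'N-ZA-Mn-za-m'        # → Uryyb

# -c complements the set, -d deletes: keep only the digits
echo "order #4521, qty 3" | tr -cd '[:digit:]'   # → 45213
```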
chattr
chattr changes the file attributes on a Linux file system. The format of a symbolic mode is +-=[aAcCdDeFijmPsStTux]. The operator '+' causes the selected attributes to be added to the existing attributes of the files; '-' causes them to be removed; and '=' causes them to be the only attributes that the files have. The letters 'aAcCdDeFijmPsStTux' select the new attributes for the files: append only (a), no atime updates (A), compressed (c), no copy on write (C), no dump (d), synchronous directory updates (D), extent format (e), case-insensitive directory lookups (F), immutable (i), data journaling (j), don't compress (m), project hierarchy (P), secure deletion (s), synchronous updates (S), no tail-merging (t), top of directory hierarchy (T), undeletable (u), and direct access for files (x). The following attributes are read-only, and may be listed by lsattr(1) but not modified by chattr: encrypted (E), indexed directory (I), inline data (N), and verity (V). Not all flags are supported or utilized by all file systems; refer to file system-specific man pages such as btrfs(5), ext4(5), mkfs.f2fs(8), and xfs(5) for more file system-specific details. -R Recursively change attributes of directories and their contents. -V Be verbose with chattr's output and print the program version. -f Suppress most error messages. -v version Set the file's version/generation number. -p project Set the file's project number.
# chattr

> Change attributes of files or directories.
> More information: https://manned.org/chattr.

* Make a file or directory immutable to changes and deletion, even by superuser:

`chattr +i {{path/to/file_or_directory}}`

* Make a file or directory mutable:

`chattr -i {{path/to/file_or_directory}}`

* Recursively make an entire directory and contents immutable:

`chattr -R +i {{path/to/directory}}`
git-reset
In the first three forms, copy entries from <tree-ish> to the index. In the last form, set the current branch head (HEAD) to <commit>, optionally modifying index and working tree to match. The <tree-ish>/<commit> defaults to HEAD in all forms. git reset [-q] [<tree-ish>] [--] <pathspec>..., git reset [-q] [--pathspec-from-file=<file> [--pathspec-file-nul]] [<tree-ish>] These forms reset the index entries for all paths that match the <pathspec> to their state at <tree-ish>. (It does not affect the working tree or the current branch.) This means that git reset <pathspec> is the opposite of git add <pathspec>. This command is equivalent to git restore [--source=<tree-ish>] --staged <pathspec>.... After running git reset <pathspec> to update the index entry, you can use git-restore(1) to check the contents out of the index to the working tree. Alternatively, using git-restore(1) and specifying a commit with --source, you can copy the contents of a path out of a commit to the index and to the working tree in one go. git reset (--patch | -p) [<tree-ish>] [--] [<pathspec>...] Interactively select hunks in the difference between the index and <tree-ish> (defaults to HEAD). The chosen hunks are applied in reverse to the index. This means that git reset -p is the opposite of git add -p, i.e. you can use it to selectively reset hunks. See the “Interactive Mode” section of git-add(1) to learn how to operate the --patch mode. git reset [<mode>] [<commit>] This form resets the current branch head to <commit> and possibly updates the index (resetting it to the tree of <commit>) and the working tree depending on <mode>. Before the operation, ORIG_HEAD is set to the tip of the current branch. If <mode> is omitted, defaults to --mixed. The <mode> must be one of the following: --soft Does not touch the index file or the working tree at all (but resets the head to <commit>, just like all modes do). 
This leaves all your changed files "Changes to be committed", as git status would put it. --mixed Resets the index but not the working tree (i.e., the changed files are preserved but not marked for commit) and reports what has not been updated. This is the default action. If -N is specified, removed paths are marked as intent-to-add (see git-add(1)). --hard Resets the index and working tree. Any changes to tracked files in the working tree since <commit> are discarded. Any untracked files or directories in the way of writing any tracked files are simply deleted. --merge Resets the index and updates the files in the working tree that are different between <commit> and HEAD, but keeps those which are different between the index and working tree (i.e. which have changes which have not been added). If a file that is different between <commit> and the index has unstaged changes, reset is aborted. In other words, --merge does something like a git read-tree -u -m <commit>, but carries forward unmerged index entries. --keep Resets index entries and updates files in the working tree that are different between <commit> and HEAD. If a file that is different between <commit> and HEAD has local changes, reset is aborted. --[no-]recurse-submodules When the working tree is updated, using --recurse-submodules will also recursively reset the working tree of all active submodules according to the commit recorded in the superproject, also setting the submodules' HEAD to be detached at that commit. See "Reset, restore and revert" in git(1) for the differences between the three commands. -q, --quiet Be quiet, only report errors. --refresh, --no-refresh Refresh the index after a mixed reset. Enabled by default. --pathspec-from-file=<file> Pathspec is passed in <file> instead of commandline args. If <file> is exactly - then standard input is used. Pathspec elements are separated by LF or CR/LF. 
Pathspec elements can be quoted as explained for the configuration variable core.quotePath (see git-config(1)). See also --pathspec-file-nul and global --literal-pathspecs. --pathspec-file-nul Only meaningful with --pathspec-from-file. Pathspec elements are separated with NUL character and all other characters are taken literally (including newlines and quotes). -- Do not interpret any more arguments as options. <pathspec>... Limits the paths affected by the operation. For more details, see the pathspec entry in gitglossary(7).
# git reset

> Undo commits or unstage changes, by resetting the current Git HEAD to the specified state.
> If a path is passed, it works as "unstage"; if a commit hash or branch is passed, it works as "uncommit".
> More information: https://git-scm.com/docs/git-reset.

* Unstage everything:

`git reset`

* Unstage specific file(s):

`git reset {{path/to/file1 path/to/file2 ...}}`

* Interactively unstage portions of a file:

`git reset --patch {{path/to/file}}`

* Undo the last commit, keeping its changes (and any further uncommitted changes) in the filesystem:

`git reset HEAD~`

* Undo the last two commits, adding their changes to the index, i.e. staged for commit:

`git reset --soft HEAD~2`

* Discard any uncommitted changes, staged or not (for only unstaged changes, use `git checkout`):

`git reset --hard`

* Reset the repository to a given commit, discarding committed, staged and uncommitted changes since then:

`git reset --hard {{commit}}`
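The difference between `--soft` and the default `--mixed` mode is easiest to see in a scratch repository (a sketch; the file name and commit messages are invented):

```shell
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email you@example.com && git config user.name you
echo one > f.txt && git add f.txt && git commit -qm "first"
echo two >> f.txt && git add f.txt && git commit -qm "second"

git reset --soft HEAD~    # undo "second"; its changes stay staged
git status --short        # → M  f.txt   (staged)
git reset -q              # mixed reset: unstage, keep the file content
git status --short        # →  M f.txt   (unstaged)
```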
uuidgen
The uuidgen program creates (and prints) a new universally unique identifier (UUID) using the libuuid(3) library. The new UUID can reasonably be considered unique among all UUIDs created on the local system, and among UUIDs created on other systems in the past and in the future. There are three types of UUIDs which uuidgen can generate: time-based UUIDs, random-based UUIDs, and hash-based UUIDs. By default uuidgen will generate a random-based UUID if a high-quality random number generator is present. Otherwise, it will choose a time-based UUID. It is possible to force the generation of one of these first two UUID types by using the --random or --time options. The third type of UUID is generated with the --md5 or --sha1 options, followed by --namespace namespace and --name name. The namespace may either be a well-known UUID, or else an alias to one of the well-known UUIDs defined in RFC 4122, that is @dns, @url, @oid, or @x500. The name is an arbitrary string value. The generated UUID is the digest of the concatenation of the namespace UUID and the name value, hashed with the MD5 or SHA1 algorithms. It is, therefore, a predictable value which may be useful when UUIDs are being used as handles or nonces for more complex values or values which shouldn’t be disclosed directly. See the RFC for more information. -r, --random Generate a random-based UUID. This method creates a UUID consisting mostly of random bits. It requires that the operating system has a high quality random number generator, such as /dev/random. -t, --time Generate a time-based UUID. This method creates a UUID based on the system clock plus the system’s ethernet hardware address, if present. -h, --help Display help text and exit. -V, --version Print version and exit. -m, --md5 Use MD5 as the hash algorithm. -s, --sha1 Use SHA1 as the hash algorithm. -n, --namespace namespace Generate the hash with the namespace prefix. 
The namespace is either a UUID, or '@ns' where "ns" is a well-known predefined UUID addressed by namespace name (see above). -N, --name name Generate the hash of the name. -x, --hex Interpret the name argument as a hexadecimal string.
# uuidgen

> Generate new UUID (Universally Unique IDentifier) strings.
> More information: https://www.ss64.com/osx/uuidgen.html.

* Generate a UUID string:

`uuidgen`
git-clone
Clones a repository into a newly created directory, creates remote-tracking branches for each branch in the cloned repository (visible using git branch --remotes), and creates and checks out an initial branch that is forked from the cloned repository’s currently active branch. After the clone, a plain git fetch without arguments will update all the remote-tracking branches, and a git pull without arguments will in addition merge the remote master branch into the current master branch, if any (this is untrue when "--single-branch" is given; see below). This default configuration is achieved by creating references to the remote branch heads under refs/remotes/origin and by initializing remote.origin.url and remote.origin.fetch configuration variables. -l, --local When the repository to clone from is on a local machine, this flag bypasses the normal "Git aware" transport mechanism and clones the repository by making a copy of HEAD and everything under objects and refs directories. The files under .git/objects/ directory are hardlinked to save space when possible. If the repository is specified as a local path (e.g., /path/to/repo), this is the default, and --local is essentially a no-op. If the repository is specified as a URL, then this flag is ignored (and we never use the local optimizations). Specifying --no-local will override the default when /path/to/repo is given, using the regular Git transport instead. If the repository’s $GIT_DIR/objects has symbolic links or is a symbolic link, the clone will fail. This is a security measure to prevent the unintentional copying of files by dereferencing the symbolic links. NOTE: this operation can race with concurrent modification to the source repository, similar to running cp -r src dst while modifying src. --no-hardlinks Force the cloning process from a repository on a local filesystem to copy the files under the .git/objects directory instead of using hardlinks. 
This may be desirable if you are trying to make a back-up of your repository. -s, --shared When the repository to clone is on the local machine, instead of using hard links, automatically setup .git/objects/info/alternates to share the objects with the source repository. The resulting repository starts out without any object of its own. NOTE: this is a possibly dangerous operation; do not use it unless you understand what it does. If you clone your repository using this option and then delete branches (or use any other Git command that makes any existing commit unreferenced) in the source repository, some objects may become unreferenced (or dangling). These objects may be removed by normal Git operations (such as git commit) which automatically call git maintenance run --auto. (See git-maintenance(1).) If these objects are removed and were referenced by the cloned repository, then the cloned repository will become corrupt. Note that running git repack without the --local option in a repository cloned with --shared will copy objects from the source repository into a pack in the cloned repository, removing the disk space savings of clone --shared. It is safe, however, to run git gc, which uses the --local option by default. If you want to break the dependency of a repository cloned with --shared on its source repository, you can simply run git repack -a to copy all objects from the source repository into a pack in the cloned repository. --reference[-if-able] <repository> If the reference repository is on the local machine, automatically setup .git/objects/info/alternates to obtain objects from the reference repository. Using an already existing repository as an alternate will require fewer objects to be copied from the repository being cloned, reducing network and local storage costs. When using --reference-if-able, a non-existing directory is skipped with a warning instead of aborting the clone.
NOTE: see the NOTE for the --shared option, and also the --dissociate option. --dissociate Borrow the objects from reference repositories specified with the --reference options only to reduce network transfer, and stop borrowing from them after a clone is made by making necessary local copies of borrowed objects. This option can also be used when cloning locally from a repository that already borrows objects from another repository—the new repository will borrow objects from the same repository, and this option can be used to stop the borrowing. -q, --quiet Operate quietly. Progress is not reported to the standard error stream. -v, --verbose Run verbosely. Does not affect the reporting of progress status to the standard error stream. --progress Progress status is reported on the standard error stream by default when it is attached to a terminal, unless --quiet is specified. This flag forces progress status even if the standard error stream is not directed to a terminal. --server-option=<option> Transmit the given string to the server when communicating using protocol version 2. The given string must not contain a NUL or LF character. The server’s handling of server options, including unknown ones, is server-specific. When multiple --server-option=<option> are given, they are all sent to the other side in the order listed on the command line. -n, --no-checkout No checkout of HEAD is performed after the clone is complete. --[no-]reject-shallow Fail if the source repository is a shallow repository. The clone.rejectShallow configuration variable can be used to specify the default. --bare Make a bare Git repository. That is, instead of creating <directory> and placing the administrative files in <directory>/.git, make the <directory> itself the $GIT_DIR. This obviously implies the --no-checkout because there is nowhere to check out the working tree. 
Also the branch heads at the remote are copied directly to corresponding local branch heads, without mapping them to refs/remotes/origin/. When this option is used, neither remote-tracking branches nor the related configuration variables are created. --sparse Employ a sparse-checkout, with only files in the toplevel directory initially being present. The git-sparse-checkout(1) command can be used to grow the working directory as needed. --filter=<filter-spec> Use the partial clone feature and request that the server sends a subset of reachable objects according to a given object filter. When using --filter, the supplied <filter-spec> is used for the partial clone filter. For example, --filter=blob:none will filter out all blobs (file contents) until needed by Git. Also, --filter=blob:limit=<size> will filter out all blobs of size at least <size>. For more details on filter specifications, see the --filter option in git-rev-list(1). --also-filter-submodules Also apply the partial clone filter to any submodules in the repository. Requires --filter and --recurse-submodules. This can be turned on by default by setting the clone.filterSubmodules config option. --mirror Set up a mirror of the source repository. This implies --bare. Compared to --bare, --mirror not only maps local branches of the source to local branches of the target, it maps all refs (including remote-tracking branches, notes etc.) and sets up a refspec configuration such that all these refs are overwritten by a git remote update in the target repository. -o <name>, --origin <name> Instead of using the remote name origin to keep track of the upstream repository, use <name>. Overrides clone.defaultRemoteName from the config. -b <name>, --branch <name> Instead of pointing the newly created HEAD to the branch pointed to by the cloned repository’s HEAD, point to <name> branch instead. In a non-bare repository, this is the branch that will be checked out. 
--branch can also take tags and detaches the HEAD at that commit in the resulting repository. -u <upload-pack>, --upload-pack <upload-pack> When given, and the repository to clone from is accessed via ssh, this specifies a non-default path for the command run on the other end. --template=<template-directory> Specify the directory from which templates will be used; (See the "TEMPLATE DIRECTORY" section of git-init(1).) -c <key>=<value>, --config <key>=<value> Set a configuration variable in the newly-created repository; this takes effect immediately after the repository is initialized, but before the remote history is fetched or any files checked out. The key is in the same format as expected by git-config(1) (e.g., core.eol=true). If multiple values are given for the same key, each value will be written to the config file. This makes it safe, for example, to add additional fetch refspecs to the origin remote. Due to limitations of the current implementation, some configuration variables do not take effect until after the initial fetch and checkout. Configuration variables known to not take effect are: remote.<name>.mirror and remote.<name>.tagOpt. Use the corresponding --mirror and --no-tags options instead. --depth <depth> Create a shallow clone with a history truncated to the specified number of commits. Implies --single-branch unless --no-single-branch is given to fetch the histories near the tips of all branches. If you want to clone submodules shallowly, also pass --shallow-submodules. --shallow-since=<date> Create a shallow clone with a history after the specified time. --shallow-exclude=<revision> Create a shallow clone with a history, excluding commits reachable from a specified remote branch or tag. This option can be specified multiple times. --[no-]single-branch Clone only the history leading to the tip of a single branch, either specified by the --branch option or the primary branch remote’s HEAD points at. 
Further fetches into the resulting repository will only update the remote-tracking branch for the branch this option was used for the initial cloning. If the HEAD at the remote did not point at any branch when --single-branch clone was made, no remote-tracking branch is created. --no-tags Don’t clone any tags, and set remote.<remote>.tagOpt=--no-tags in the config, ensuring that future git pull and git fetch operations won’t follow any tags. Subsequent explicit tag fetches will still work, (see git-fetch(1)). Can be used in conjunction with --single-branch to clone and maintain a branch with no references other than a single cloned branch. This is useful e.g. to maintain minimal clones of the default branch of some repository for search indexing. --recurse-submodules[=<pathspec>] After the clone is created, initialize and clone submodules within based on the provided pathspec. If no pathspec is provided, all submodules are initialized and cloned. This option can be given multiple times for pathspecs consisting of multiple entries. The resulting clone has submodule.active set to the provided pathspec, or "." (meaning all submodules) if no pathspec is provided. Submodules are initialized and cloned using their default settings. This is equivalent to running git submodule update --init --recursive <pathspec> immediately after the clone is finished. This option is ignored if the cloned repository does not have a worktree/checkout (i.e. if any of --no-checkout/-n, --bare, or --mirror is given) --[no-]shallow-submodules All submodules which are cloned will be shallow with a depth of 1. --[no-]remote-submodules All submodules which are cloned will use the status of the submodule’s remote-tracking branch to update the submodule, rather than the superproject’s recorded SHA-1. Equivalent to passing --remote to git submodule update. 
--separate-git-dir=<git-dir> Instead of placing the cloned repository where it is supposed to be, place the cloned repository at the specified directory, then make a filesystem-agnostic Git symbolic link to there. As a result, the Git repository can be separated from the working tree. -j <n>, --jobs <n> The number of submodules fetched at the same time. Defaults to the submodule.fetchJobs option. <repository> The (possibly remote) repository to clone from. See the GIT URLS section below for more information on specifying repositories. <directory> The name of a new directory to clone into. The "humanish" part of the source repository is used if no directory is explicitly given (repo for /path/to/repo.git and foo for host.xz:foo/.git). Cloning into an existing directory is only allowed if the directory is empty. --bundle-uri=<uri> Before fetching from the remote, fetch a bundle from the given <uri> and unbundle the data into the local repository. The refs in the bundle will be stored under the hidden refs/bundle/* namespace. This option is incompatible with --depth, --shallow-since, and --shallow-exclude.
# git clone

> Clone an existing repository.
> More information: https://git-scm.com/docs/git-clone.

* Clone an existing repository into a new directory (the default directory is the repository name):

`git clone {{remote_repository_location}} {{path/to/directory}}`

* Clone an existing repository and its submodules:

`git clone --recursive {{remote_repository_location}}`

* Clone only the `.git` directory of an existing repository:

`git clone --no-checkout {{remote_repository_location}}`

* Clone a local repository:

`git clone --local {{path/to/local/repository}}`

* Clone quietly:

`git clone --quiet {{remote_repository_location}}`

* Clone an existing repository only fetching the 10 most recent commits on the default branch (useful to save time):

`git clone --depth {{10}} {{remote_repository_location}}`

* Clone an existing repository only fetching a specific branch:

`git clone --branch {{name}} --single-branch {{remote_repository_location}}`

* Clone an existing repository using a specific SSH command:

`git clone --config core.sshCommand="{{ssh -i path/to/private_ssh_key}}" {{remote_repository_location}}`
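Shallow cloning can be demonstrated entirely locally. Note that `--depth` is ignored for plain local paths (the local transport copies everything), so the source must be addressed with a `file://` URL; the paths below are invented:

```shell
tmp=$(mktemp -d) && cd "$tmp"
git init -q src && cd src
git config user.email you@example.com && git config user.name you
for i in 1 2 3; do echo "$i" > f.txt && git add f.txt && git commit -qm "commit $i"; done
cd ..

git clone -q --depth 1 "file://$tmp/src" shallow
git -C shallow rev-list --count HEAD   # → 1 (history truncated)
git clone -q "$tmp/src" full
git -C full rev-list --count HEAD      # → 3 (full history)
```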
cups-config
The cups-config command allows application developers to determine the necessary command-line options for the compiler and linker, as well as the installation directories for filters, configuration files, and drivers. All values are reported to the standard output. The cups-config command accepts the following command-line options: --api-version Reports the current API version (major.minor). --build Reports a system-specific build number. --cflags Reports the necessary compiler options. --datadir Reports the default CUPS data directory. --help Reports the program usage message. --ldflags Reports the necessary linker options. --libs Reports the necessary libraries to link to. --serverbin Reports the default CUPS binary directory, where filters and backends are stored. --serverroot Reports the default CUPS configuration file directory. --static When used with --libs, reports the static libraries instead of the default (shared) libraries. --version Reports the full version number of the CUPS installation (major.minor.patch).
# cups-config

> Show technical information about your CUPS print server installation.
> More information: https://www.cups.org/doc/man-cups-config.html.

* Show the currently installed version of CUPS:

`cups-config --version`

* Show where CUPS is currently installed:

`cups-config --serverbin`

* Show the location of CUPS' configuration directory:

`cups-config --serverroot`

* Show the location of CUPS' data directory:

`cups-config --datadir`

* Display all available options:

`cups-config --help`
mkfifo
Create named pipes (FIFOs) with the given NAMEs. Mandatory arguments to long options are mandatory for short options too. -m, --mode=MODE set file permission bits to MODE, not a=rw - umask -Z set the SELinux security context to default type --context[=CTX] like -Z, or if CTX is specified then set the SELinux or SMACK security context to CTX --help display this help and exit --version output version information and exit
# mkfifo

> Makes FIFOs (named pipes).
> More information: https://www.gnu.org/software/coreutils/mkfifo.

* Create a named pipe at a given path:

`mkfifo {{path/to/pipe}}`
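A FIFO connects two otherwise unrelated processes: opening it for reading blocks until a writer opens the other end, and vice versa. A minimal sketch with invented paths:

```shell
tmp=$(mktemp -d)
mkfifo "$tmp/pipe"

# Start a background reader; it blocks until a writer opens the FIFO
cat "$tmp/pipe" > "$tmp/out" &

echo "through the pipe" > "$tmp/pipe"   # unblocks the reader
wait                                    # let the background cat finish
cat "$tmp/out"                          # → through the pipe
```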
logger
The logger utility saves a message, in an unspecified manner and format, containing the string operands provided by the user. The messages are expected to be evaluated later by personnel performing system administration tasks. It is implementation-defined whether messages written in locales other than the POSIX locale are effective. The POSIX specification defines no options for this utility.
# logger > Add messages to syslog (/var/log/syslog). More information: > https://manned.org/logger. * Log a message to syslog: `logger {{message}}` * Take input from `stdin` and log to syslog: `echo {{log_entry}} | logger` * Send the output to a remote syslog server running at a given port (default is 514): `echo {{log_entry}} | logger --server {{hostname}} --port {{port}}` * Use a specific tag for every line logged. Default is the name of the logged-in user: `echo {{log_entry}} | logger --tag {{tag}}` * Log messages with a given priority. Default is `user.notice`. See `man logger` for all priority options: `echo {{log_entry}} | logger --priority {{user.warning}}`
git-apply
Reads the supplied diff output (i.e. "a patch") and applies it to files. When running from a subdirectory in a repository, patched paths outside the directory are ignored. With the --index option the patch is also applied to the index, and with the --cached option the patch is only applied to the index. Without these options, the command applies the patch only to files, and does not require them to be in a Git repository. This command applies the patch but does not create a commit. Use git-am(1) to create commits from patches generated by git-format-patch(1) and/or received by email. <patch>... The files to read the patch from. - can be used to read from the standard input. --stat Instead of applying the patch, output diffstat for the input. Turns off "apply". --numstat Similar to --stat, but shows the number of added and deleted lines in decimal notation and the pathname without abbreviation, to make it more machine friendly. For binary files, outputs two - instead of saying 0 0. Turns off "apply". --summary Instead of applying the patch, output a condensed summary of information obtained from git diff extended headers, such as creations, renames and mode changes. Turns off "apply". --check Instead of applying the patch, see if the patch is applicable to the current working tree and/or the index file and detects errors. Turns off "apply". --index Apply the patch to both the index and the working tree (or merely check that it would apply cleanly to both if --check is in effect). Note that --index expects index entries and working tree copies for relevant paths to be identical (their contents and metadata such as file mode must match), and will raise an error if they are not, even if the patch would apply cleanly to both the index and the working tree in isolation. --cached Apply the patch to just the index, without touching the working tree. If --check is in effect, merely check that it would apply cleanly to the index entry. 
--intent-to-add When applying the patch only to the working tree, mark new files to be added to the index later (see --intent-to-add option in git-add(1)). This option is ignored unless running in a Git repository and --index is not specified. Note that --index could be implied by other options such as --cached or --3way. -3, --3way Attempt 3-way merge if the patch records the identity of blobs it is supposed to apply to and we have those blobs available locally, possibly leaving the conflict markers in the files in the working tree for the user to resolve. This option implies the --index option unless the --cached option is used, and is incompatible with the --reject option. When used with the --cached option, any conflicts are left at higher stages in the cache. --build-fake-ancestor=<file> Newer git diff output has embedded index information for each blob to help identify the original version that the patch applies to. When this flag is given, and if the original versions of the blobs are available locally, builds a temporary index containing those blobs. When a pure mode change is encountered (which has no index information), the information is read from the current index instead. -R, --reverse Apply the patch in reverse. --reject For atomicity, git apply by default fails the whole patch and does not touch the working tree when some of the hunks do not apply. This option makes it apply the parts of the patch that are applicable, and leave the rejected hunks in corresponding *.rej files. -z When --numstat has been given, do not munge pathnames, but use a NUL-terminated machine-readable format. Without this option, pathnames with "unusual" characters are quoted as explained for the configuration variable core.quotePath (see git-config(1)). -p<n> Remove <n> leading path components (separated by slashes) from traditional diff paths. E.g., with -p2, a patch against a/dir/file will be applied directly to file. The default is 1. 
-C<n> Ensure at least <n> lines of surrounding context match before and after each change. When fewer lines of surrounding context exist they all must match. By default no context is ever ignored. --unidiff-zero By default, git apply expects that the patch being applied is a unified diff with at least one line of context. This provides good safety measures, but breaks down when applying a diff generated with --unified=0. To bypass these checks use --unidiff-zero. Note, for the reasons stated above usage of context-free patches is discouraged. --apply If you use any of the options marked "Turns off apply" above, git apply reads and outputs the requested information without actually applying the patch. Give this flag after those flags to also apply the patch. --no-add When applying a patch, ignore additions made by the patch. This can be used to extract the common part between two files by first running diff on them and applying the result with this option, which would apply the deletion part but not the addition part. --allow-binary-replacement, --binary Historically we did not allow binary patch applied without an explicit permission from the user, and this flag was the way to do so. Currently we always allow binary patch application, so this is a no-op. --exclude=<path-pattern> Don’t apply changes to files matching the given path pattern. This can be useful when importing patchsets, where you want to exclude certain files or directories. --include=<path-pattern> Apply changes to files matching the given path pattern. This can be useful when importing patchsets, where you want to include certain files or directories. When --exclude and --include patterns are used, they are examined in the order they appear on the command line, and the first match determines if a patch to each path is used. 
A patch to a path that does not match any include/exclude pattern is used by default if there is no include pattern on the command line, and ignored if there is any include pattern. --ignore-space-change, --ignore-whitespace When applying a patch, ignore changes in whitespace in context lines if necessary. Context lines will preserve their whitespace, and they will not undergo whitespace fixing regardless of the value of the --whitespace option. New lines will still be fixed, though. --whitespace=<action> When applying a patch, detect a new or modified line that has whitespace errors. What are considered whitespace errors is controlled by core.whitespace configuration. By default, trailing whitespaces (including lines that solely consist of whitespaces) and a space character that is immediately followed by a tab character inside the initial indent of the line are considered whitespace errors. By default, the command outputs warning messages but applies the patch. When git-apply is used for statistics and not applying a patch, it defaults to nowarn. You can use different <action> values to control this behavior: • nowarn turns off the trailing whitespace warning. • warn outputs warnings for a few such errors, but applies the patch as-is (default). • fix outputs warnings for a few such errors, and applies the patch after fixing them (strip is a synonym — the tool used to consider only trailing whitespace characters as errors, and the fix involved stripping them, but modern Gits do more). • error outputs warnings for a few such errors, and refuses to apply the patch. • error-all is similar to error but shows all errors. --inaccurate-eof Under certain circumstances, some versions of diff do not correctly detect a missing new-line at the end of the file. As a result, patches created by such diff programs do not record incomplete lines correctly. This option adds support for applying such patches by working around this bug. -v, --verbose Report progress to stderr. 
By default, only a message about the current patch being applied will be printed. This option will cause additional information to be reported. -q, --quiet Suppress stderr output. Messages about patch status and progress will not be printed. --recount Do not trust the line counts in the hunk headers, but infer them by inspecting the patch (e.g. after editing the patch without adjusting the hunk headers appropriately). --directory=<root> Prepend <root> to all filenames. If a "-p" argument was also passed, it is applied before prepending the new root. For example, a patch that talks about updating a/git-gui.sh to b/git-gui.sh can be applied to the file in the working tree modules/git-gui/git-gui.sh by running git apply --directory=modules/git-gui. --unsafe-paths By default, a patch that affects outside the working area (either a Git controlled working tree, or the current working directory when "git apply" is used as a replacement of GNU patch) is rejected as a mistake (or a mischief). When git apply is used as a "better GNU patch", the user can pass the --unsafe-paths option to override this safety check. This option has no effect when --index or --cached is in use. --allow-empty Don’t return error for patches containing no diff. This includes empty patches and patches with commit text only.
# git apply > Apply a patch to files and/or to the index without creating a commit. See > also `git am`, which applies a patch and also creates a commit. More > information: https://git-scm.com/docs/git-apply. * Print messages about the patched files: `git apply --verbose {{path/to/file}}` * Apply and add the patched files to the index: `git apply --index {{path/to/file}}` * Apply a remote patch file: `curl -L {{https://example.com/file.patch}} | git apply` * Output diffstat for the input and apply the patch: `git apply --stat --apply {{path/to/file}}` * Apply the patch in reverse: `git apply --reverse {{path/to/file}}` * Store the patch result in the index without modifying the working tree: `git apply --cached {{path/to/file}}`
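As the description notes, `git apply` does not require a repository; it can stand in for GNU `patch` on plain files. A sketch (file and patch names are hypothetical) using a `diff -u` patch whose `a/`/`b/` prefixes are stripped by the default `-p1`:

```shell
# Build a unified diff with diff(1), then apply it outside any repository.
mkdir -p a b
printf 'one\n'      > a/file.txt
printf 'one\ntwo\n' > b/file.txt
diff -u a/file.txt b/file.txt > change.patch || true   # diff exits 1 when files differ
cp a/file.txt file.txt
git apply --check change.patch   # dry run: verify the patch applies cleanly
git apply change.patch           # default -p1 strips the a/ and b/ path prefixes
cat file.txt                     # now contains both lines
```

Running `--check` first mirrors the "Turns off apply" behavior described above: it reports errors without touching the working files.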
strings
The strings utility shall look for printable strings in regular files and shall write those strings to standard output. A printable string is any sequence of four (by default) or more printable characters terminated by a <newline> or NUL character. Additional implementation-defined strings may be written; see localedef. If the first argument is '-', the results are unspecified. The strings utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines, except for the unspecified usage of '-'. The following options shall be supported: -a Scan files in their entirety. If -a is not specified, it is implementation-defined what portion of each file is scanned for strings. -n number Specify the minimum string length, where the number argument is a positive decimal integer. The default shall be 4. -t format Write each string preceded by its byte offset from the start of the file. The format shall be dependent on the single character used as the format option-argument: d The offset shall be written in decimal. o The offset shall be written in octal. x The offset shall be written in hexadecimal.
# strings > Find printable strings in an object file or binary. More information: > https://manned.org/strings. * Print all strings in a binary: `strings {{path/to/file}}` * Limit results to strings at least length characters long: `strings -n {{length}} {{path/to/file}}` * Prefix each result with its offset within the file: `strings -t d {{path/to/file}}` * Prefix each result with its offset within the file in hexadecimal: `strings -t x {{path/to/file}}`
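The `-n` threshold is easiest to see on a file that mixes short and long printable runs separated by NUL bytes (the file name below is hypothetical):

```shell
# Embed short and long runs in a binary blob; -n 5 keeps only runs of 5+ characters.
printf 'ab\0hello\0world\0hi\0' > sample.bin
strings -n 5 sample.bin    # "hello" and "world" qualify; "ab" and "hi" are too short
rm sample.bin
```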
hexdump
The hexdump utility is a filter which displays the specified files, or standard input if no files are specified, in a user-specified format. Below, the length and offset arguments may be followed by the multiplicative suffixes KiB (=1024), MiB (=1024*1024), and so on for GiB, TiB, PiB, EiB, ZiB and YiB (the "iB" is optional, e.g., "K" has the same meaning as "KiB"), or the suffixes KB (=1000), MB (=1000*1000), and so on for GB, TB, PB, EB, ZB and YB. -b, --one-byte-octal One-byte octal display. Display the input offset in hexadecimal, followed by sixteen space-separated, three-column, zero-filled bytes of input data, in octal, per line. -X, --one-byte-hex One-byte hexadecimal display. Display the input offset in hexadecimal, followed by sixteen space-separated, two-column, zero-filled bytes of input data, in hexadecimal, per line. -c, --one-byte-char One-byte character display. Display the input offset in hexadecimal, followed by sixteen space-separated, three-column, space-filled characters of input data per line. -C, --canonical Canonical hex+ASCII display. Display the input offset in hexadecimal, followed by sixteen space-separated, two-column, hexadecimal bytes, followed by the same sixteen bytes in %_p format enclosed in | characters. Invoking the program as hd implies this option. -d, --two-bytes-decimal Two-byte decimal display. Display the input offset in hexadecimal, followed by eight space-separated, five-column, zero-filled, two-byte units of input data, in unsigned decimal, per line. -e, --format format_string Specify a format string to be used for displaying data. -f, --format-file file Specify a file that contains one or more newline-separated format strings. Empty lines and lines whose first non-blank character is a hash mark (#) are ignored. -L, --color[=when] Accept color units for the output. The optional argument when can be auto, never or always. If the when argument is omitted, it defaults to auto. 
The colors can be disabled; for the current built-in default see the --help output. See also the Colors subsection and the COLORS section below. -n, --length length Interpret only length bytes of input. -o, --two-bytes-octal Two-byte octal display. Display the input offset in hexadecimal, followed by eight space-separated, six-column, zero-filled, two-byte quantities of input data, in octal, per line. -s, --skip offset Skip offset bytes from the beginning of the input. -v, --no-squeezing The -v option causes hexdump to display all input data. Without the -v option, any number of groups of output lines which would be identical to the immediately preceding group of output lines (except for the input offsets), are replaced with a line comprised of a single asterisk. -x, --two-bytes-hex Two-byte hexadecimal display. Display the input offset in hexadecimal, followed by eight space-separated, four-column, zero-filled, two-byte quantities of input data, in hexadecimal, per line. -h, --help Display help text and exit. -V, --version Print version and exit. For each input file, hexdump sequentially copies the input to standard output, transforming the data according to the format strings specified by the -e and -f options, in the order that they were specified.
# hexdump > An ASCII, decimal, hexadecimal, octal dump. More information: > https://manned.org/hexdump. * Print the hexadecimal representation of a file, replacing duplicate lines by '*': `hexdump {{path/to/file}}` * Display the input offset in hexadecimal and its ASCII representation in two columns: `hexdump -C {{path/to/file}}` * Display the hexadecimal representation of a file, but interpret only n bytes of the input: `hexdump -C -n{{number_of_bytes}} {{path/to/file}}` * Don't replace duplicate lines with '*': `hexdump --no-squeezing {{path/to/file}}`
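The `-e` option described above takes a format string of the form `iterations/byte_count "fmt"`. A minimal sketch printing each input byte as two hex digits:

```shell
# Each iteration formats 4 one-byte units with printf-style "%02x", then a newline.
printf 'ABCD' | hexdump -e '4/1 "%02x " "\n"'   # prints the bytes 41 42 43 44
```

This is handy when the built-in layouts (`-C`, `-x`, ...) don't match the exact output a downstream tool expects.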
git-update-index
Modifies the index. Each file mentioned is updated into the index and any unmerged or needs updating state is cleared. See also git-add(1) for a more user-friendly way to do some of the most common operations on the index. The way git update-index handles files it is told about can be modified using the various options: --add If a specified file isn’t in the index already then it’s added. Default behavior is to ignore new files. --remove If a specified file is in the index but is missing then it’s removed. Default behavior is to ignore removed files. --refresh Looks at the current index and checks to see if merges or updates are needed by checking stat() information. -q Quiet. If --refresh finds that the index needs an update, the default behavior is to error out. This option makes git update-index continue anyway. --ignore-submodules Do not try to update submodules. This option is only respected when passed before --refresh. --unmerged If --refresh finds unmerged changes in the index, the default behavior is to error out. This option makes git update-index continue anyway. --ignore-missing Ignores missing files during a --refresh. --cacheinfo <mode>,<object>,<path>, --cacheinfo <mode> <object> <path> Directly insert the specified info into the index. For backward compatibility, you can also give these three arguments as three separate parameters, but new users are encouraged to use a single-parameter form. --index-info Read index information from stdin. --chmod=(+|-)x Set the execute permissions on the updated files. --[no-]assume-unchanged When this flag is specified, the object names recorded for the paths are not updated. Instead, this option sets/unsets the "assume unchanged" bit for the paths. When the "assume unchanged" bit is on, the user promises not to change the file and allows Git to assume that the working tree file matches what is recorded in the index. If you want to change the working tree file, you need to unset the bit to tell Git. 
This is sometimes helpful when working with a big project on a filesystem that has very slow lstat(2) system call (e.g. cifs). Git will fail (gracefully) in case it needs to modify this file in the index e.g. when merging in a commit; thus, in case the assumed-untracked file is changed upstream, you will need to handle the situation manually. --really-refresh Like --refresh, but checks stat information unconditionally, without regard to the "assume unchanged" setting. --[no-]skip-worktree When one of these flags is specified, the object names recorded for the paths are not updated. Instead, these options set and unset the "skip-worktree" bit for the paths. See section "Skip-worktree bit" below for more information. --[no-]ignore-skip-worktree-entries Do not remove skip-worktree (AKA "index-only") entries even when the --remove option was specified. --[no-]fsmonitor-valid When one of these flags is specified, the object names recorded for the paths are not updated. Instead, these options set and unset the "fsmonitor valid" bit for the paths. See section "File System Monitor" below for more information. -g, --again Runs git update-index itself on the paths whose index entries are different from those from the HEAD commit. --unresolve Restores the unmerged or needs updating state of a file during a merge if it was cleared by accident. --info-only Do not create objects in the object database for all <file> arguments that follow this flag; just insert their object IDs into the index. --force-remove Remove the file from the index even when the working directory still has such a file. (Implies --remove.) --replace By default, when a file path exists in the index, git update-index refuses an attempt to add path/file. Similarly if a file path/file exists, a file path cannot be added. With the --replace flag, existing entries that conflict with the entry being added are automatically removed with warning messages. 
--stdin Instead of taking list of paths from the command line, read list of paths from the standard input. Paths are separated by LF (i.e. one path per line) by default. --verbose Report what is being added and removed from index. --index-version <n> Write the resulting index out in the named on-disk format version. Supported versions are 2, 3 and 4. The current default version is 2 or 3, depending on whether extra features are used, such as git add -N. Version 4 performs a simple pathname compression that reduces index size by 30%-50% on large repositories, which results in faster load time. Version 4 is relatively young (first released in 1.8.0 in October 2012). Other Git implementations such as JGit and libgit2 may not support it yet. -z Only meaningful with --stdin or --index-info; paths are separated with NUL character instead of LF. --split-index, --no-split-index Enable or disable split index mode. If split-index mode is already enabled and --split-index is given again, all changes in $GIT_DIR/index are pushed back to the shared index file. These options take effect whatever the value of the core.splitIndex configuration variable (see git-config(1)). But a warning is emitted when the change goes against the configured value, as the configured value will take effect next time the index is read and this will remove the intended effect of the option. --untracked-cache, --no-untracked-cache Enable or disable untracked cache feature. Please use --test-untracked-cache before enabling it. These options take effect whatever the value of the core.untrackedCache configuration variable (see git-config(1)). But a warning is emitted when the change goes against the configured value, as the configured value will take effect next time the index is read and this will remove the intended effect of the option. --test-untracked-cache Only perform tests on the working directory to make sure untracked cache can be used. 
You have to manually enable untracked cache using --untracked-cache or --force-untracked-cache or the core.untrackedCache configuration variable afterwards if you really want to use it. If a test fails the exit code is 1 and a message explains what is not working as needed, otherwise the exit code is 0 and OK is printed. --force-untracked-cache Same as --untracked-cache. Provided for backwards compatibility with older versions of Git where --untracked-cache used to imply --test-untracked-cache but this option would enable the extension unconditionally. --fsmonitor, --no-fsmonitor Enable or disable the file system monitor feature. These options take effect whatever the value of the core.fsmonitor configuration variable (see git-config(1)). But a warning is emitted when the change goes against the configured value, as the configured value will take effect next time the index is read and this will remove the intended effect of the option. -- Do not interpret any more arguments as options. <file> Files to act on. Note that files beginning with . are discarded. This includes ./file and dir/./file. If you don’t want this, then use cleaner names. The same applies to directories ending / and paths with //.
# git update-index > Git command for manipulating the index. More information: https://git- > scm.com/docs/git-update-index. * Pretend that a modified file is unchanged (`git status` will not show this as changed): `git update-index --skip-worktree {{path/to/modified_file}}`
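The skip-worktree example above can be walked through end to end in a throwaway repository (repository, file, and identity names below are hypothetical):

```shell
# Sketch: hide a local edit to a tracked file, then make it visible again.
git init -q demo-repo && cd demo-repo
git config user.email you@example.com && git config user.name demo  # hypothetical identity
echo "shared=1" > settings.conf
git add settings.conf && git commit -qm "add settings"
echo "local=1" >> settings.conf                      # a local-only tweak
git update-index --skip-worktree settings.conf
git status --porcelain                               # prints nothing: the edit is hidden
git update-index --no-skip-worktree settings.conf    # undo; the edit shows again
cd ..
```

Note that, per the man text above, this sets a bit in the index rather than changing any file content.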
valgrind
Valgrind is a flexible program for debugging and profiling Linux executables. It consists of a core, which provides a synthetic CPU in software, and a series of debugging and profiling tools. The architecture is modular, so that new tools can be created easily and without disturbing the existing structure. Some of the options described below work with all Valgrind tools, and some only work with a few or one. The section MEMCHECK OPTIONS and those below it describe tool-specific options. This manual page covers only basic usage and options. For more comprehensive information, please see the HTML documentation on your system: $INSTALL/share/doc/valgrind/html/index.html, or online: http://www.valgrind.org/docs/manual/index.html.
# valgrind > Wrapper for a set of expert tools for profiling, optimizing and debugging > programs. Commonly used tools include `memcheck`, `cachegrind`, `callgrind`, > `massif`, `helgrind`, and `drd`. More information: http://www.valgrind.org. * Use the (default) Memcheck tool to show a diagnostic of memory usage by `program`: `valgrind {{program}}` * Use Memcheck to report all possible memory leaks of `program` in full detail: `valgrind --leak-check=full --show-leak-kinds=all {{program}}` * Use the Cachegrind tool to profile and log CPU cache operations of `program`: `valgrind --tool=cachegrind {{program}}` * Use the Massif tool to profile and log heap memory and stack usage of `program`: `valgrind --tool=massif --stacks=yes {{program}}`
od
Write an unambiguous representation, octal bytes by default, of FILE to standard output. With more than one FILE argument, concatenate them in the listed order to form the input. With no FILE, or when FILE is -, read standard input. If first and second call formats both apply, the second format is assumed if the last operand begins with + or (if there are 2 operands) a digit. An OFFSET operand means -j OFFSET. LABEL is the pseudo-address at first byte printed, incremented when dump is progressing. For OFFSET and LABEL, a 0x or 0X prefix indicates hexadecimal; suffixes may be . for octal and b for multiply by 512. Mandatory arguments to long options are mandatory for short options too. -A, --address-radix=RADIX output format for file offsets; RADIX is one of [doxn], for Decimal, Octal, Hex or None --endian={big|little} swap input bytes according the specified order -j, --skip-bytes=BYTES skip BYTES input bytes first -N, --read-bytes=BYTES limit dump to BYTES input bytes -S BYTES, --strings[=BYTES] output strings of at least BYTES graphic chars; 3 is implied when BYTES is not specified -t, --format=TYPE select output format or formats -v, --output-duplicates do not use * to mark line suppression -w[BYTES], --width[=BYTES] output BYTES bytes per output line; 32 is implied when BYTES is not specified --traditional accept arguments in third form above --help display this help and exit --version output version information and exit Traditional format specifications may be intermixed; they accumulate: -a same as -t a, select named characters, ignoring high-order bit -b same as -t o1, select octal bytes -c same as -t c, select printable characters or backslash escapes -d same as -t u2, select unsigned decimal 2-byte units -f same as -t fF, select floats -i same as -t dI, select decimal ints -l same as -t dL, select decimal longs -o same as -t o2, select octal 2-byte units -s same as -t d2, select decimal 2-byte units -x same as -t x2, select hexadecimal 2-byte units TYPE is 
made up of one or more of these specifications: a named character, ignoring high-order bit c printable character or backslash escape d[SIZE] signed decimal, SIZE bytes per integer f[SIZE] floating point, SIZE bytes per float o[SIZE] octal, SIZE bytes per integer u[SIZE] unsigned decimal, SIZE bytes per integer x[SIZE] hexadecimal, SIZE bytes per integer SIZE is a number. For TYPE in [doux], SIZE may also be C for sizeof(char), S for sizeof(short), I for sizeof(int) or L for sizeof(long). If TYPE is f, SIZE may also be F for sizeof(float), D for sizeof(double) or L for sizeof(long double). Adding a z suffix to any type displays printable characters at the end of each output line. BYTES is hex with 0x or 0X prefix, and may have a multiplier suffix: b 512 KB 1000 K 1024 MB 1000*1000 M 1024*1024 and so on for G, T, P, E, Z, Y, R, Q. Binary prefixes can be used, too: KiB=K, MiB=M, and so on.
# od > Display file contents in octal, decimal or hexadecimal format. Optionally > display the byte offsets and/or printable representation for each line. More > information: https://www.gnu.org/software/coreutils/od. * Display file using default settings: octal format, 8 bytes per line, byte offsets in octal, and duplicate lines replaced with `*`: `od {{path/to/file}}` * Display file in verbose mode, i.e. without replacing duplicate lines with `*`: `od -v {{path/to/file}}` * Display file in hexadecimal format (2-byte units), with byte offsets in decimal format: `od --format={{x}} --address-radix={{d}} -v {{path/to/file}}` * Display file in hexadecimal format (1-byte units), and 4 bytes per line: `od --format={{x1}} --width={{4}} -v {{path/to/file}}` * Display file in hexadecimal format along with its character representation, and do not print byte offsets: `od --format={{xz}} --address-radix={{n}} -v {{path/to/file}}` * Read only 100 bytes of a file starting from the 500th byte: `od --read-bytes={{100}} --skip-bytes={{500}} -v {{path/to/file}}`
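Combining `-A n` (no offset column) with a `-t` type gives bare byte values, which is convenient in pipelines. A small sketch:

```shell
# One-byte hex units, offsets suppressed: just the byte values 41 42.
printf 'AB' | od -A n -t x1
```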
uuencode
The uuencode utility shall write an encoded version of the named input file, or standard input if no file is specified, to standard output. The output shall be encoded using one of the algorithms described in the STDOUT section and shall include the file access permission bits (in chmod octal or symbolic notation) of the input file and the decode_pathname, for re-creation of the file on another system that conforms to this volume of POSIX.1‐2017. The uuencode utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines. The following option shall be supported by the implementation: -m Encode the output using the MIME Base64 algorithm described in STDOUT. If -m is not specified, the historical algorithm described in STDOUT shall be used.
# uuencode > Encode binary files into ASCII for transport via mediums that only support > simple ASCII encoding. More information: https://manned.org/uuencode. * Encode a file and print the result to `stdout`: `uuencode {{path/to/input_file}} {{output_file_name_after_decoding}}` * Encode a file and write the result to a file: `uuencode -o {{path/to/output_file}} {{path/to/input_file}} {{output_file_name_after_decoding}}` * Encode a file using Base64 instead of the default uuencode encoding and write the result to a file: `uuencode -m -o {{path/to/output_file}} {{path/to/input_file}} {{output_file_name_after_decoding}}`
cmp
Compare two files byte by byte. The optional SKIP1 and SKIP2 specify the number of bytes to skip at the beginning of each file (zero by default). Mandatory arguments to long options are mandatory for short options too. -b, --print-bytes print differing bytes -i, --ignore-initial=SKIP skip first SKIP bytes of both inputs -i, --ignore-initial=SKIP1:SKIP2 skip first SKIP1 bytes of FILE1 and first SKIP2 bytes of FILE2 -l, --verbose output byte numbers and differing byte values -n, --bytes=LIMIT compare at most LIMIT bytes -s, --quiet, --silent suppress all normal output --help display this help and exit -v, --version output version information and exit SKIP values may be followed by the following multiplicative suffixes: kB 1000, K 1024, MB 1,000,000, M 1,048,576, GB 1,000,000,000, G 1,073,741,824, and so on for T, P, E, Z, Y. If a FILE is '-' or missing, read standard input. Exit status is 0 if inputs are the same, 1 if different, 2 if trouble.
# cmp > Compare two files byte by byte. More information: > https://www.gnu.org/software/diffutils/manual/html_node/Invoking-cmp.html. * Output char and line number of the first difference between two files: `cmp {{path/to/file1}} {{path/to/file2}}` * Output info of the first difference: char, line number, bytes, and values: `cmp --print-bytes {{path/to/file1}} {{path/to/file2}}` * Output the byte numbers and values of every difference: `cmp --verbose {{path/to/file1}} {{path/to/file2}}` * Compare files but output nothing, yield only the exit status: `cmp --quiet {{path/to/file1}} {{path/to/file2}}`
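The exit status (0 if identical, 1 if different, 2 on trouble) makes `cmp --quiet` a natural building block in scripts; a minimal sketch with hypothetical file names:

```shell
# Drive shell logic from cmp's exit status instead of its output.
printf 'abc' > left.txt
printf 'abd' > right.txt
if cmp --silent left.txt right.txt; then echo "identical"; else echo "files differ"; fi
rm left.txt right.txt
```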
hostname
Print or set the hostname of the current system. --help display this help and exit --version output version information and exit
# hostname > Show or set the system's host name. More information: > https://manned.org/hostname. * Show current host name: `hostname` * Show the network address of the host name: `hostname -i` * Show all network addresses of the host: `hostname -I` * Show the FQDN (Fully Qualified Domain Name): `hostname --fqdn` * Set current host name: `hostname {{new_hostname}}`
od
Write an unambiguous representation, octal bytes by default, of FILE to standard output. With more than one FILE argument, concatenate them in the listed order to form the input. With no FILE, or when FILE is -, read standard input. If first and second call formats both apply, the second format is assumed if the last operand begins with + or (if there are 2 operands) a digit. An OFFSET operand means -j OFFSET. LABEL is the pseudo-address at first byte printed, incremented when dump is progressing. For OFFSET and LABEL, a 0x or 0X prefix indicates hexadecimal; suffixes may be . for octal and b for multiply by 512. Mandatory arguments to long options are mandatory for short options too. -A, --address-radix=RADIX output format for file offsets; RADIX is one of [doxn], for Decimal, Octal, Hex or None --endian={big|little} swap input bytes according to the specified order -j, --skip-bytes=BYTES skip BYTES input bytes first -N, --read-bytes=BYTES limit dump to BYTES input bytes -S BYTES, --strings[=BYTES] output strings of at least BYTES graphic chars; 3 is implied when BYTES is not specified -t, --format=TYPE select output format or formats -v, --output-duplicates do not use * to mark line suppression -w[BYTES], --width[=BYTES] output BYTES bytes per output line; 32 is implied when BYTES is not specified --traditional accept arguments in third form above --help display this help and exit --version output version information and exit Traditional format specifications may be intermixed; they accumulate: -a same as -t a, select named characters, ignoring high-order bit -b same as -t o1, select octal bytes -c same as -t c, select printable characters or backslash escapes -d same as -t u2, select unsigned decimal 2-byte units -f same as -t fF, select floats -i same as -t dI, select decimal ints -l same as -t dL, select decimal longs -o same as -t o2, select octal 2-byte units -s same as -t d2, select decimal 2-byte units -x same as -t x2, select hexadecimal 2-byte units TYPE is made up of one or more of these specifications: a named character, ignoring high-order bit c printable character or backslash escape d[SIZE] signed decimal, SIZE bytes per integer f[SIZE] floating point, SIZE bytes per float o[SIZE] octal, SIZE bytes per integer u[SIZE] unsigned decimal, SIZE bytes per integer x[SIZE] hexadecimal, SIZE bytes per integer SIZE is a number. For TYPE in [doux], SIZE may also be C for sizeof(char), S for sizeof(short), I for sizeof(int) or L for sizeof(long). If TYPE is f, SIZE may also be F for sizeof(float), D for sizeof(double) or L for sizeof(long double). Adding a z suffix to any type displays printable characters at the end of each output line. BYTES is hex with 0x or 0X prefix, and may have a multiplier suffix: b 512 KB 1000 K 1024 MB 1000*1000 M 1024*1024 and so on for G, T, P, E, Z, Y, R, Q. Binary prefixes can be used, too: KiB=K, MiB=M, and so on.
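A few of the format options above in action (the input bytes are illustrative):

```shell
# Hex dump in 1-byte units, with the offset column suppressed (-A n):
printf 'ABC' | od -A n -t x1

# The same bytes as named characters (-a is shorthand for -t a):
printf 'ABC' | od -A n -a

# Hex units with printable characters appended (z suffix) and decimal offsets:
printf 'hello world' | od -A d -t x1z
```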
# od > Display file contents in octal, decimal or hexadecimal format. Optionally display the byte offsets and/or printable representation for each line. More information: https://www.gnu.org/software/coreutils/od. * Display file using default settings: octal format, 16 bytes per line (shown as 2-byte octal units), byte offsets in octal, and duplicate lines replaced with `*`: `od {{path/to/file}}` * Display file in verbose mode, i.e. without replacing duplicate lines with `*`: `od -v {{path/to/file}}` * Display file in hexadecimal format (2-byte units), with byte offsets in decimal format: `od --format={{x}} --address-radix={{d}} -v {{path/to/file}}` * Display file in hexadecimal format (1-byte units), and 4 bytes per line: `od --format={{x1}} --width={{4}} -v {{path/to/file}}` * Display file in hexadecimal format along with its character representation, and do not print byte offsets: `od --format={{xz}} --address-radix={{n}} -v {{path/to/file}}` * Read only 100 bytes of a file, skipping the first 500 bytes: `od --read-bytes={{100}} --skip-bytes={{500}} -v {{path/to/file}}`
b2sum
Print or check BLAKE2b (512-bit) checksums. With no FILE, or when FILE is -, read standard input. Mandatory arguments to long options are mandatory for short options too. -b, --binary read in binary mode -c, --check read checksums from the FILEs and check them -l, --length=BITS digest length in bits; must not exceed the max for the blake2 algorithm and must be a multiple of 8 --tag create a BSD-style checksum -t, --text read in text mode (default) -z, --zero end each output line with NUL, not newline, and disable file name escaping The following five options are useful only when verifying checksums: --ignore-missing don't fail or report status for missing files --quiet don't print OK for each successfully verified file --status don't output anything, status code shows success --strict exit non-zero for improperly formatted checksum lines -w, --warn warn about improperly formatted checksum lines --help display this help and exit --version output version information and exit The sums are computed as described in RFC 7693. When checking, the input should be a former output of this program. The default mode is to print a line with: checksum, a space, a character indicating input mode ('*' for binary, ' ' for text or where binary is insignificant), and name for each FILE. Note: There is no difference between binary mode and text mode on GNU systems.
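The generate-then-verify workflow described above, sketched with an illustrative file name:

```shell
printf 'data\n' > sample.txt          # illustrative file

# Record the checksum, then verify it against the stored line.
b2sum sample.txt > sample.b2
b2sum --check sample.b2               # reports "sample.txt: OK"

# --status suppresses all output; the exit code alone reports the result.
b2sum --check --status sample.b2 && echo "verified"

rm sample.txt sample.b2
```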
# b2sum > Calculate BLAKE2 cryptographic checksums. More information: > https://www.gnu.org/software/coreutils/b2sum. * Calculate the BLAKE2 checksum for one or more files: `b2sum {{path/to/file1 path/to/file2 ...}}` * Calculate and save the list of BLAKE2 checksums to a file: `b2sum {{path/to/file1 path/to/file2 ...}} > {{path/to/file.b2}}` * Calculate a BLAKE2 checksum from `stdin`: `{{command}} | b2sum` * Read a file of BLAKE2 sums and filenames and verify all files have matching checksums: `b2sum --check {{path/to/file.b2}}` * Only show a message for missing files or when verification fails: `b2sum --check --quiet {{path/to/file.b2}}` * Only show a message when verification fails, ignoring missing files: `b2sum --ignore-missing --check --quiet {{path/to/file.b2}}`
git-status
Displays paths that have differences between the index file and the current HEAD commit, paths that have differences between the working tree and the index file, and paths in the working tree that are not tracked by Git (and are not ignored by gitignore(5)). The first are what you would commit by running git commit; the second and third are what you could commit by running git add before running git commit. -s, --short Give the output in the short-format. -b, --branch Show the branch and tracking info even in short-format. --show-stash Show the number of entries currently stashed away. --porcelain[=<version>] Give the output in an easy-to-parse format for scripts. This is similar to the short output, but will remain stable across Git versions and regardless of user configuration. See below for details. The version parameter is used to specify the format version. This is optional and defaults to the original version v1 format. --long Give the output in the long-format. This is the default. -v, --verbose In addition to the names of files that have been changed, also show the textual changes that are staged to be committed (i.e., like the output of git diff --cached). If -v is specified twice, then also show the changes in the working tree that have not yet been staged (i.e., like the output of git diff). -u[<mode>], --untracked-files[=<mode>] Show untracked files. The mode parameter is used to specify the handling of untracked files. It is optional: it defaults to all, and if specified, it must be stuck to the option (e.g. -uno, but not -u no). The possible options are: • no - Show no untracked files. • normal - Shows untracked files and directories. • all - Also shows individual files in untracked directories. When -u option is not used, untracked files and directories are shown (i.e. the same as specifying normal), to help you avoid forgetting to add newly created files. 
Because it takes extra work to find untracked files in the filesystem, this mode may take some time in a large working tree. Consider enabling untracked cache and split index if supported (see git update-index --untracked-cache and git update-index --split-index). Otherwise you can use no to have git status return more quickly without showing untracked files. The default can be changed using the status.showUntrackedFiles configuration variable documented in git-config(1). --ignore-submodules[=<when>] Ignore changes to submodules when looking for changes. <when> can be either "none", "untracked", "dirty" or "all", which is the default. Using "none" will consider the submodule modified when it either contains untracked or modified files or its HEAD differs from the commit recorded in the superproject and can be used to override any settings of the ignore option in git-config(1) or gitmodules(5). When "untracked" is used submodules are not considered dirty when they only contain untracked content (but they are still scanned for modified content). Using "dirty" ignores all changes to the work tree of submodules; only changes to the commits stored in the superproject are shown (this was the behavior before 1.7.0). Using "all" hides all changes to submodules (and suppresses the output of submodule summaries when the config option status.submoduleSummary is set). --ignored[=<mode>] Show ignored files as well. The mode parameter is used to specify the handling of ignored files. It is optional: it defaults to traditional. The possible options are: • traditional - Shows ignored files and directories, unless --untracked-files=all is specified, in which case individual files in ignored directories are displayed. • no - Show no ignored files. • matching - Shows ignored files and directories matching an ignore pattern. When matching mode is specified, paths that explicitly match an ignored pattern are shown.
If a directory matches an ignore pattern, then it is shown, but not paths contained in the ignored directory. If a directory does not match an ignore pattern, but all contents are ignored, then the directory is not shown, but all contents are shown. -z Terminate entries with NUL, instead of LF. This implies the --porcelain=v1 output format if no other format is given. --column[=<options>], --no-column Display untracked files in columns. See configuration variable column.status for option syntax. --column and --no-column without options are equivalent to always and never respectively. --ahead-behind, --no-ahead-behind Display or do not display detailed ahead/behind counts for the branch relative to its upstream branch. Defaults to true. --renames, --no-renames Turn on/off rename detection regardless of user configuration. See also git-diff(1) --no-renames. --find-renames[=<n>] Turn on rename detection, optionally setting the similarity threshold. See also git-diff(1) --find-renames. <pathspec>... See the pathspec entry in gitglossary(7).
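The --porcelain format mentioned above is the stable, script-friendly one; a minimal sketch (the repository and file names are hypothetical):

```shell
git init -q demo && cd demo
touch untracked.txt

# Each line starts with a two-character state code; "??" marks untracked paths.
git status --porcelain

# Parse the state code and path:
git status --porcelain | while read -r state path; do
  [ "$state" = "??" ] && echo "untracked: $path"
done
```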
# git status > Show the changes to files in a Git repository. Lists changed, added and > deleted files compared to the currently checked-out commit. More > information: https://git-scm.com/docs/git-status. * Show changed files which are not yet added for commit: `git status` * Give output in [s]hort format: `git status -s` * Don't show untracked files in the output: `git status --untracked-files=no` * Show output in [s]hort format along with [b]ranch info: `git status -sb`
time
The time utility shall invoke the utility named by the utility operand with arguments supplied as the argument operands and write a message to standard error that lists timing statistics for the utility. The message shall include the following information: * The elapsed (real) time between invocation of utility and its termination. * The User CPU time, equivalent to the sum of the tms_utime and tms_cutime fields returned by the times() function defined in the System Interfaces volume of POSIX.1‐2017 for the process in which utility is executed. * The System CPU time, equivalent to the sum of the tms_stime and tms_cstime fields returned by the times() function for the process in which utility is executed. The precision of the timing shall be no less than the granularity defined for the size of the clock tick unit on the system, but the results shall be reported in terms of standard time units (for example, 0.02 seconds, 00:00:00.02, 1m33.75s, 365.21 seconds), not numbers of clock ticks. When time is used as part of a pipeline, the times reported are unspecified, except when it is the sole command within a grouping command (see Section 2.9.4.1, Grouping Commands) in that pipeline. For example, the commands on the left are unspecified; those on the right report on utilities a and c, respectively: time a | b | c { time a; } | b | c a | b | time c a | b | (time c) The time utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines. The following option shall be supported: -p Write the timing output to standard error in the format shown in the STDERR section.
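The grouping-command rule above in practice; these forms use bash syntax, where `time` is a shell keyword:

```shell
# Timing a bare pipeline stage is unspecified; wrap the stage in a grouping
# command so only 'sleep' is measured, and capture the report from stderr:
{ time sleep 1; } 2> timing.txt | cat
cat timing.txt

# -p selects the portable three-line format (real/user/sys), written to stderr:
time -p sleep 1
```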
# time > Measure how long a command took to run. Note: `time` can exist as a shell builtin, a standalone program, or both. More information: https://manned.org/time. * Run the `command` and print the time measurements to `stderr`: `time {{command}}`
split
The split utility shall read an input file and write zero or more output files. The default size of each output file shall be 1000 lines. The size of the output files can be modified by specification of the -b or -l options. Each output file shall be created with a unique suffix. The suffix shall consist of exactly suffix_length lowercase letters from the POSIX locale. The letters of the suffix shall be used as if they were a base-26 digit system, with the first suffix to be created consisting of all 'a' characters, the second with a 'b' replacing the last 'a', and so on, until a name of all 'z' characters is created. By default, the names of the output files shall be 'x', followed by a two-character suffix from the character set as described above, starting with "aa", "ab", "ac", and so on, and continuing until the suffix "zz", for a maximum of 676 files. If the number of files required exceeds the maximum allowed by the suffix length provided, such that the last allowable file would be larger than the requested size, the split utility shall fail after creating the last file with a valid suffix; split shall not delete the files it created with valid suffixes. If the file limit is not exceeded, the last file created shall contain the remainder of the input file, and may be smaller than the requested size. If the input is an empty file, no output file shall be created and this shall not be considered to be an error. The split utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines. The following options shall be supported: -a suffix_length Use suffix_length letters to form the suffix portion of the filenames of the split file. If -a is not specified, the default suffix length shall be two. If the sum of the name operand and the suffix_length option-argument would create a filename exceeding {NAME_MAX} bytes, an error shall result; split shall exit with a diagnostic message and no files shall be created. 
-b n Split a file into pieces n bytes in size. -b nk Split a file into pieces n*1024 bytes in size. -b nm Split a file into pieces n*1048576 bytes in size. -l line_count Specify the number of lines in each resulting file piece. The line_count argument is an unsigned decimal integer. The default is 1000. If the input does not end with a <newline>, the partial line shall be included in the last output file.
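The suffix ordering described above means that concatenating the pieces in name order reconstructs the input (file names are illustrative):

```shell
seq 1 10 > numbers.txt

# 10 lines into 3-line pieces: xaa, xab, xac (3 lines each) and xad (1 line).
split -l 3 numbers.txt

# Concatenating in suffix order restores the original file byte for byte.
cat xaa xab xac xad > rejoined.txt
cmp -s numbers.txt rejoined.txt && echo "round trip OK"

rm numbers.txt rejoined.txt xa?
```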
# split > Split a file into pieces. More information: https://ss64.com/osx/split.html. * Split a file, each split having 10 lines (except the last split): `split -l {{10}} {{path/to/file}}` * Split a file by a regular expression. The matching line will be the first line of the next output file: `split -p {{cat|^[dh]og}} {{path/to/file}}` * Split a file with 512 bytes in each split (except the last split; use 512k for kilobytes and 512m for megabytes): `split -b {{512}} {{path/to/file}}` * Split a file into 5 files. The file is split such that each piece has the same size (except the last): `split -n {{5}} {{path/to/file}}`
su
su allows commands to be run with a substitute user and group ID. When called with no user specified, su defaults to running an interactive shell as root. When user is specified, additional arguments can be supplied, in which case they are passed to the shell. For backward compatibility, su defaults to not changing the current directory and to only setting the environment variables HOME and SHELL (plus USER and LOGNAME if the target user is not root). It is recommended to always use the --login option (instead of its shortcut -) to avoid side effects caused by mixing environments. This version of su uses PAM for authentication, account and session management. Some configuration options found in other su implementations, such as support for a wheel group, have to be configured via PAM. su is mostly designed for unprivileged users; the recommended solution for privileged users (e.g., scripts executed by root) is to use the non-set-user-ID command runuser(1), which does not require authentication and provides separate PAM configuration. If the PAM session is not required at all, then the recommended solution is to use setpriv(1). Note that su in all cases uses PAM (pam_getenvlist(3)) to do the final environment modification. Command-line options such as --login and --preserve-environment affect the environment before it is modified by PAM. Since version 2.38, su resets the process resource limits RLIMIT_NICE, RLIMIT_RTPRIO, RLIMIT_FSIZE, RLIMIT_AS and RLIMIT_NOFILE. -c, --command=command Pass command to the shell with the -c option. -f, --fast Pass -f to the shell, which may or may not be useful, depending on the shell. -g, --group=group Specify the primary group. This option is available to the root user only. -G, --supp-group=group Specify a supplementary group. This option is available to the root user only. The first specified supplementary group is also used as a primary group if the option --group is not specified.
-, -l, --login Start the shell as a login shell with an environment similar to a real login: • clears all the environment variables except TERM and variables specified by --whitelist-environment • initializes the environment variables HOME, SHELL, USER, LOGNAME, and PATH • changes to the target user’s home directory • sets argv[0] of the shell to '-' in order to make the shell a login shell -m, -p, --preserve-environment Preserve the entire environment, i.e., do not set HOME, SHELL, USER or LOGNAME. This option is ignored if the option --login is specified. -P, --pty Create a pseudo-terminal for the session. The independent terminal provides better security as the user does not share a terminal with the original session. This can be used to avoid TIOCSTI ioctl terminal injection and other security attacks against terminal file descriptors. The entire session can also be moved to the background (e.g., su --pty - username -c application &). If the pseudo-terminal is enabled, then su works as a proxy between the sessions (sync stdin and stdout). This feature is mostly designed for interactive sessions. If the standard input is not a terminal, but for example a pipe (e.g., echo "date" | su --pty), then the ECHO flag for the pseudo-terminal is disabled to avoid messy output. -s, --shell=shell Run the specified shell instead of the default. The shell to run is selected according to the following rules, in order: • the shell specified with --shell • the shell specified in the environment variable SHELL, if the --preserve-environment option is used • the shell listed in the passwd entry of the target user • /bin/sh If the target user has a restricted shell (i.e., not listed in /etc/shells), the --shell option and the SHELL environment variables are ignored unless the calling user is root. --session-command=command Same as -c, but do not create a new session. (Discouraged.) 
-w, --whitelist-environment=list Don’t reset the environment variables specified in the comma-separated list when clearing the environment for --login. The whitelist is ignored for the environment variables HOME, SHELL, USER, LOGNAME, and PATH. -h, --help Display help text and exit. -V, --version Print version and exit.
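A sketch of the options above in combination ("alice" and the variable list are hypothetical; su prompts for the target user's password unless the caller is root):

```shell
# Clean login environment, run a single command, then exit:
su - alice -c 'whoami'

# Keep selected variables across the --login environment reset:
su - alice -w DISPLAY,XAUTHORITY -c 'echo "$DISPLAY"'

# Override the target user's shell (ignored for restricted shells
# unless the caller is root):
su -s /bin/sh alice -c 'echo "running in $0"'
```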
# su > Switch shell to another user. More information: https://manned.org/su. * Switch to superuser (requires the root password): `su` * Switch to a given user (requires the user's password): `su {{username}}` * Switch to a given user and simulate a full login shell: `su - {{username}}` * Execute a command as another user: `su - {{username}} -c "{{command}}"`
w
w displays information about the users currently on the machine, and their processes. The header shows, in this order, the current time, how long the system has been running, how many users are currently logged on, and the system load averages for the past 1, 5, and 15 minutes. The following entries are displayed for each user: login name, the tty name, the remote host, login time, idle time, JCPU, PCPU, and the command line of their current process. The JCPU time is the time used by all processes attached to the tty. It does not include past background jobs, but does include currently running background jobs. The PCPU time is the time used by the current process, named in the "what" field.
# w > Show who is logged on and what they are doing. Print user login, TTY, remote > host, login time, idle time, current process. More information: > https://ss64.com/osx/w.html. * Show logged-in users information: `w` * Show logged-in users information without a header: `w -h` * Show information about logged-in users, sorted by their idle time: `w -i`