Command | Text | Summary
---|---|---|
chgrp | The chgrp utility shall set the group ID of the file named by each file operand to the group ID specified by the group operand. For each file operand, or, if the -R option is used, each file encountered while walking the directory trees specified by the file operands, the chgrp utility shall perform actions equivalent to the chown() function defined in the System Interfaces volume of POSIX.1‐2017, called with the following arguments: * The file operand shall be used as the path argument. * The user ID of the file shall be used as the owner argument. * The specified group ID shall be used as the group argument. Unless chgrp is invoked by a process with appropriate privileges, the set-user-ID and set-group-ID bits of a regular file shall be cleared upon successful completion; the set-user-ID and set- group-ID bits of other file types may be cleared. The chgrp utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines. The following options shall be supported by the implementation: -h For each file operand that names a file of type symbolic link, chgrp shall attempt to set the group ID of the symbolic link instead of the file referenced by the symbolic link. -H If the -R option is specified and a symbolic link referencing a file of type directory is specified on the command line, chgrp shall change the group of the directory referenced by the symbolic link and all files in the file hierarchy below it. -L If the -R option is specified and a symbolic link referencing a file of type directory is specified on the command line or encountered during the traversal of a file hierarchy, chgrp shall change the group of the directory referenced by the symbolic link and all files in the file hierarchy below it. -P If the -R option is specified and a symbolic link is specified on the command line or encountered during the traversal of a file hierarchy, chgrp shall change the group ID of the symbolic link. The chgrp utility shall not follow the symbolic link to any other part of the file hierarchy. -R Recursively change file group IDs. For each file operand that names a directory, chgrp shall change the group of the directory and all files in the file hierarchy below it. Unless a -H, -L, or -P option is specified, it is unspecified which of these options will be used as the default. Specifying more than one of the mutually-exclusive options -H, -L, and -P shall not be considered an error. The last option specified shall determine the behavior of the utility. | # chgrp
> Change group ownership of files and directories. More information:
> https://www.gnu.org/software/coreutils/chgrp.
* Change the owner group of a file/directory:
`chgrp {{group}} {{path/to/file_or_directory}}`
* Recursively change the owner group of a directory and its contents:
`chgrp -R {{group}} {{path/to/directory}}`
* Change the owner group of a symbolic link:
`chgrp -h {{group}} {{path/to/symlink}}`
* Change the owner group of a file/directory to match a reference file:
`chgrp --reference={{path/to/reference_file}} {{path/to/file_or_directory}}` |
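A minimal shell sketch of the `chgrp` symlink-handling options above, assuming a group named `staff`, a symlink `./link`, and a `project/` tree (all placeholder names):

```sh
# Change the group of the symbolic link itself rather than its target.
chgrp -h staff ./link

# Recurse into a directory named on the command line even if it is reached
# through a symlink (-H semantics), changing the group of everything below it.
chgrp -R -H staff project/

# Inspect the result; %G prints the group name (GNU stat).
stat -c '%G %n' project/*
```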
more | more is a filter for paging through text one screenful at a time. This version is especially primitive. Users should realize that less(1) provides more(1) emulation plus extensive enhancements. Options are also taken from the environment variable MORE (make sure to precede them with a dash (-)) but command-line options will override those. -d, --silent Prompt with "[Press space to continue, 'q' to quit.]", and display "[Press 'h' for instructions.]" instead of ringing the bell when an illegal key is pressed. -l, --logical Do not pause after any line containing a ^L (form feed). -e, --exit-on-eof Exit on End-Of-File, enabled by default if POSIXLY_CORRECT environment variable is not set or if not executed on terminal. -f, --no-pause Count logical lines, rather than screen lines (i.e., long lines are not folded). -p, --print-over Do not scroll. Instead, clear the whole screen and then display the text. Notice that this option is switched on automatically if the executable is named page. -c, --clean-print Do not scroll. Instead, paint each screen from the top, clearing the remainder of each line as it is displayed. -s, --squeeze Squeeze multiple blank lines into one. -u, --plain Suppress underlining. This option is silently ignored as backwards compatibility. -n, --lines number Specify the number of lines per screenful. The number argument is a positive decimal integer. The --lines option shall override any values obtained from any other source, such as number of lines reported by terminal. -number A numeric option means the same as --lines option argument. +number Start displaying each file at line number. +/string The string to be searched in each file before starting to display it. -h, --help Display help text and exit. -V, --version Print version and exit. | # more
> Open a file for interactive reading, allowing scrolling and search. More
> information: https://manned.org/more.
* Open a file:
`more {{path/to/file}}`
* Open a file displaying from a specific line:
`more +{{line_number}} {{path/to/file}}`
* Display help:
`more --help`
* Go to the next page:
`<Space>`
* Search for a string (press `n` to go to the next match):
`/{{something}}`
* Exit:
`q`
* Display help about interactive commands:
`h` |
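A short sketch of the `MORE` environment variable and paging options described above; `notes.txt` is a placeholder file:

```sh
# Options in $MORE are read first; anything on the command line overrides them.
export MORE='-d -s'            # verbose prompts, squeeze blank lines

# Show 20 lines per screenful and start at the first line matching "TODO".
more -n 20 +/TODO notes.txt
```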
git-hash-object | Computes the object ID value for an object with specified type with the contents of the named file (which can be outside of the work tree), and optionally writes the resulting object into the object database. Reports its object ID to its standard output. When <type> is not specified, it defaults to "blob". -t <type> Specify the type (default: "blob"). -w Actually write the object into the object database. --stdin Read the object from standard input instead of from a file. --stdin-paths Read file names from the standard input, one per line, instead of from the command-line. --path Hash the object as if it were located at the given path. The location of the file does not directly influence the hash value, but path is used to determine what Git filters should be applied to the object before it can be placed to the object database, and, as a result of applying filters, the actual blob put into the object database may differ from the given file. This option is mainly useful for hashing temporary files located outside of the working directory or files read from stdin. --no-filters Hash the contents as is, ignoring any input filter that would have been chosen by the attributes mechanism, including the end-of-line conversion. If the file is read from standard input then this is always implied, unless the --path option is given. --literally Allow --stdin to hash any garbage into a loose object which might not otherwise pass standard object parsing or git-fsck checks. Useful for stress-testing Git itself or reproducing characteristics of corrupt or bogus objects encountered in the wild. | # git hash-object
> Computes the unique hash key of content and optionally creates an object
> with specified type. More information: https://git-scm.com/docs/git-hash-object.
* Compute the object ID without storing it:
`git hash-object {{path/to/file}}`
* Compute the object ID and store it in the Git database:
`git hash-object -w {{path/to/file}}`
* Compute the object ID specifying the object type:
`git hash-object -t {{blob|commit|tag|tree}} {{path/to/file}}`
* Compute the object ID from `stdin`:
`cat {{path/to/file}} | git hash-object --stdin` |
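A small sketch tying the options above together, assuming it is run inside an existing repository:

```sh
# Hash a string from stdin without storing it (prints the blob's object ID).
echo 'hello' | git hash-object --stdin

# Hash it again, this time writing the blob into the object database.
oid=$(echo 'hello' | git hash-object -w --stdin)

# Confirm the stored object's type and contents.
git cat-file -t "$oid"    # blob
git cat-file -p "$oid"    # hello
```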
id | If no user operand is provided, the id utility shall write the user and group IDs and the corresponding user and group names of the invoking process to standard output. If the effective and real IDs do not match, both shall be written. If multiple groups are supported by the underlying system (see the description of {NGROUPS_MAX} in the System Interfaces volume of POSIX.1‐2017), the supplementary group affiliations of the invoking process shall also be written. If a user operand is provided and the process has appropriate privileges, the user and group IDs of the selected user shall be written. In this case, effective IDs shall be assumed to be identical to real IDs. If the selected user has more than one allowable group membership listed in the group database, these shall be written in the same manner as the supplementary groups described in the preceding paragraph. The id utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines. The following options shall be supported: -G Output all different group IDs (effective, real, and supplementary) only, using the format "%u\n". If there is more than one distinct group affiliation, output each such affiliation, using the format " %u", before the <newline> is output. -g Output only the effective group ID, using the format "%u\n". -n Output the name in the format "%s" instead of the numeric ID using the format "%u". -r Output the real ID instead of the effective ID. -u Output only the effective user ID, using the format "%u\n". | # id
> Display current user and group identity. More information:
> https://www.gnu.org/software/coreutils/id.
* Display current user's ID (UID), group ID (GID) and groups to which they belong:
`id`
* Display the current user identity as a number:
`id -u`
* Display the current group identity as a number:
`id -g`
* Display an arbitrary user's ID (UID), group ID (GID) and groups to which they belong:
`id {{username}}` |
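A brief sketch of how the `id` option letters combine, based on the formats listed above:

```sh
# Numeric effective user and group IDs of the current process.
id -u
id -g

# All distinct group IDs, then the same groups as names.
id -G
id -Gn

# Real (rather than effective) user name.
id -run
```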
nl | Write each FILE to standard output, with line numbers added. With no FILE, or when FILE is -, read standard input. Mandatory arguments to long options are mandatory for short options too. -b, --body-numbering=STYLE use STYLE for numbering body lines -d, --section-delimiter=CC use CC for logical page delimiters -f, --footer-numbering=STYLE use STYLE for numbering footer lines -h, --header-numbering=STYLE use STYLE for numbering header lines -i, --line-increment=NUMBER line number increment at each line -l, --join-blank-lines=NUMBER group of NUMBER empty lines counted as one -n, --number-format=FORMAT insert line numbers according to FORMAT -p, --no-renumber do not reset line numbers for each section -s, --number-separator=STRING add STRING after (possible) line number -v, --starting-line-number=NUMBER first line number for each section -w, --number-width=NUMBER use NUMBER columns for line numbers --help display this help and exit --version output version information and exit Default options are: -bt -d'\:' -fn -hn -i1 -l1 -n'rn' -s<TAB> -v1 -w6 CC are two delimiter characters used to construct logical page delimiters; a missing second character implies ':'. As a GNU extension one can specify more than two characters, and also specifying the empty string (-d '') disables section matching. STYLE is one of: a number all lines t number only nonempty lines n number no lines pBRE number only lines that contain a match for the basic regular expression, BRE FORMAT is one of: ln left justified, no leading zeros rn right justified, no leading zeros rz right justified, leading zeros | # nl
> A utility for numbering lines, either from a file or from `stdin`. More
> information: https://www.gnu.org/software/coreutils/nl.
* Number non-blank lines in a file:
`nl {{path/to/file}}`
* Read from `stdin`:
`cat {{path/to/file}} | nl {{options}} -`
* Number only non-empty lines, making the default body style explicit:
`nl -b t {{path/to/file}}`
* Number all lines including blank lines:
`nl -b a {{path/to/file}}`
* Number only the body lines that match a basic regular expression (BRE) pattern:
`nl -b p'FooBar[0-9]' {{path/to/file}}` |
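A short sketch of the STYLE and FORMAT arguments described above; `file.txt` is a placeholder:

```sh
# Number every line (blank lines included), right-justified with leading zeros,
# four columns wide, separated from the text by ") ".
nl -b a -n rz -w 4 -s ') ' file.txt

# Number only lines whose body matches a basic regular expression.
nl -b p'^Chapter' file.txt
```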
git-check-ignore | For each pathname given via the command-line or from a file via --stdin, check whether the file is excluded by .gitignore (or other input files to the exclude mechanism) and output the path if it is excluded. By default, tracked files are not shown at all since they are not subject to exclude rules; but see ‘--no-index’. -q, --quiet Don’t output anything, just set exit status. This is only valid with a single pathname. -v, --verbose Instead of printing the paths that are excluded, for each path that matches an exclude pattern, print the exclude pattern together with the path. (Matching an exclude pattern usually means the path is excluded, but if the pattern begins with "!" then it is a negated pattern and matching it means the path is NOT excluded.) For precedence rules within and between exclude sources, see gitignore(5). --stdin Read pathnames from the standard input, one per line, instead of from the command-line. -z The output format is modified to be machine-parsable (see below). If --stdin is also given, input paths are separated with a NUL character instead of a linefeed character. -n, --non-matching Show given paths which don’t match any pattern. This only makes sense when --verbose is enabled, otherwise it would not be possible to distinguish between paths which match a pattern and those which don’t. --no-index Don’t look in the index when undertaking the checks. This can be used to debug why a path became tracked by e.g. git add . and was not ignored by the rules as expected by the user or when developing patterns including negation to match a path previously added with git add -f. | # git check-ignore
> Analyze and debug Git ignore/exclude (".gitignore") files. More information:
> https://git-scm.com/docs/git-check-ignore.
* Check whether a file or directory is ignored:
`git check-ignore {{path/to/file_or_directory}}`
* Check whether multiple files or directories are ignored:
`git check-ignore {{path/to/file}} {{path/to/directory}}`
* Use pathnames, one per line, from `stdin`:
`git check-ignore --stdin < {{path/to/file_list}}`
* Do not check the index (used to debug why paths were tracked and not ignored):
`git check-ignore --no-index {{path/to/files_or_directories}}`
* Include details about the matching pattern for each path:
`git check-ignore --verbose {{path/to/files_or_directories}}` |
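A minimal sketch of the verbose and `--stdin` modes, run from inside a repository; the paths are placeholders:

```sh
# Show which ignore file and pattern (if any) excludes each path.
git check-ignore --verbose build/ dist/main.js notes.txt

# Check a list of paths read from stdin, one per line.
printf '%s\n' build/ src/app.c | git check-ignore --stdin
```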
tcpdump | Tcpdump prints out a description of the contents of packets on a network interface that match the Boolean expression (see pcap-filter(@MAN_MISC_INFO@) for the expression syntax); the description is preceded by a time stamp, printed, by default, as hours, minutes, seconds, and fractions of a second since midnight. It can also be run with the -w flag, which causes it to save the packet data to a file for later analysis, and/or with the -r flag, which causes it to read from a saved packet file rather than to read packets from a network interface. It can also be run with the -V flag, which causes it to read a list of saved packet files. In all cases, only packets that match expression will be processed by tcpdump. Tcpdump will, if not run with the -c flag, continue capturing packets until it is interrupted by a SIGINT signal (generated, for example, by typing your interrupt character, typically control-C) or a SIGTERM signal (typically generated with the kill(1) command); if run with the -c flag, it will capture packets until it is interrupted by a SIGINT or SIGTERM signal or the specified number of packets have been processed. When tcpdump finishes capturing packets, it will report counts of: packets ``captured'' (this is the number of packets that tcpdump has received and processed); packets ``received by filter'' (the meaning of this depends on the OS on which you're running tcpdump, and possibly on the way the OS was configured - if a filter was specified on the command line, on some OSes it counts packets regardless of whether they were matched by the filter expression and, even if they were matched by the filter expression, regardless of whether tcpdump has read and processed them yet, on other OSes it counts only packets that were matched by the filter expression regardless of whether tcpdump has read and processed them yet, and on other OSes it counts only packets that were matched by the filter expression and were processed by tcpdump); packets ``dropped by kernel'' (this is the number of packets that were dropped, due to a lack of buffer space, by the packet capture mechanism in the OS on which tcpdump is running, if the OS reports that information to applications; if not, it will be reported as 0). On platforms that support the SIGINFO signal, such as most BSDs (including macOS) and Digital/Tru64 UNIX, it will report those counts when it receives a SIGINFO signal (generated, for example, by typing your ``status'' character, typically control-T, although on some platforms, such as macOS, the ``status'' character is not set by default, so you must set it with stty(1) in order to use it) and will continue capturing packets. On platforms that do not support the SIGINFO signal, the same can be achieved by using the SIGUSR1 signal. Using the SIGUSR2 signal along with the -w flag will forcibly flush the packet buffer into the output file. Reading packets from a network interface may require that you have special privileges; see the pcap(3PCAP) man page for details. Reading a saved packet file doesn't require special privileges. -A Print each packet (minus its link level header) in ASCII. Handy for capturing web pages. -b Print the AS number in BGP packets in ASDOT notation rather than ASPLAIN notation. -B buffer_size --buffer-size=buffer_size Set the operating system capture buffer size to buffer_size, in units of KiB (1024 bytes). -c count Exit after receiving count packets. 
--count Print only on stdout the packet count when reading capture file(s) instead of parsing/printing the packets. If a filter is specified on the command line, tcpdump counts only packets that were matched by the filter expression. -C file_size Before writing a raw packet to a savefile, check whether the file is currently larger than file_size and, if so, close the current savefile and open a new one. Savefiles after the first savefile will have the name specified with the -w flag, with a number after it, starting at 1 and continuing upward. The default unit of file_size is millions of bytes (1,000,000 bytes, not 1,048,576 bytes). By adding a suffix of k/K, m/M or g/G to the value, the unit can be changed to 1,024 (KiB), 1,048,576 (MiB), or 1,073,741,824 (GiB) respectively. -d Dump the compiled packet-matching code in a human readable form to standard output and stop. Please mind that although code compilation is always DLT- specific, typically it is impossible (and unnecessary) to specify which DLT to use for the dump because tcpdump uses either the DLT of the input pcap file specified with -r, or the default DLT of the network interface specified with -i, or the particular DLT of the network interface specified with -y and -i respectively. In these cases the dump shows the same exact code that would filter the input file or the network interface without -d. However, when neither -r nor -i is specified, specifying -d prevents tcpdump from guessing a suitable network interface (see -i). In this case the DLT defaults to EN10MB and can be set to another valid value manually with -y. -dd Dump packet-matching code as a C program fragment. -ddd Dump packet-matching code as decimal numbers (preceded with a count). -D --list-interfaces Print the list of the network interfaces available on the system and on which tcpdump can capture packets. For each network interface, a number and an interface name, possibly followed by a text description of the interface, are printed. The interface name or the number can be supplied to the -i flag to specify an interface on which to capture. This can be useful on systems that don't have a command to list them (e.g., Windows systems, or UNIX systems lacking ifconfig -a); the number can be useful on Windows 2000 and later systems, where the interface name is a somewhat complex string. The -D flag will not be supported if tcpdump was built with an older version of libpcap that lacks the pcap_findalldevs(3PCAP) function. -e Print the link-level header on each dump line. This can be used, for example, to print MAC layer addresses for protocols such as Ethernet and IEEE 802.11. -E Use spi@ipaddr algo:secret for decrypting IPsec ESP packets that are addressed to addr and contain Security Parameter Index value spi. This combination may be repeated with comma or newline separation. Note that setting the secret for IPv4 ESP packets is supported at this time. Algorithms may be des-cbc, 3des-cbc, blowfish-cbc, rc3-cbc, cast128-cbc, or none. The default is des-cbc. The ability to decrypt packets is only present if tcpdump was compiled with cryptography enabled. secret is the ASCII text for ESP secret key. If preceded by 0x, then a hex value will be read. The option assumes RFC 2406 ESP, not RFC 1827 ESP. The option is only for debugging purposes, and the use of this option with a true `secret' key is discouraged. By presenting IPsec secret key onto command line you make it visible to others, via ps(1) and other occasions. 
In addition to the above syntax, the syntax file name may be used to have tcpdump read the provided file in. The file is opened upon receiving the first ESP packet, so any special permissions that tcpdump may have been given should already have been given up. -f Print `foreign' IPv4 addresses numerically rather than symbolically (this option is intended to get around serious brain damage in Sun's NIS server — usually it hangs forever translating non-local internet numbers). The test for `foreign' IPv4 addresses is done using the IPv4 address and netmask of the interface on that capture is being done. If that address or netmask are not available, either because the interface on that capture is being done has no address or netmask or because it is the "any" pseudo-interface, which is available in Linux and in recent versions of macOS and Solaris, and which can capture on more than one interface, this option will not work correctly. -F file Use file as input for the filter expression. An additional expression given on the command line is ignored. -G rotate_seconds If specified, rotates the dump file specified with the -w option every rotate_seconds seconds. Savefiles will have the name specified by -w which should include a time format as defined by strftime(3). If no time format is specified, each new file will overwrite the previous. Whenever a generated filename is not unique, tcpdump will overwrite the pre-existing data; providing a time specification that is coarser than the capture period is therefore not advised. If used in conjunction with the -C option, filenames will take the form of `file<count>'. -h --help Print the tcpdump and libpcap version strings, print a usage message, and exit. --version Print the tcpdump and libpcap version strings and exit. -H Attempt to detect 802.11s draft mesh headers. -i interface --interface=interface Listen, report the list of link-layer types, report the list of time stamp types, or report the results of compiling a filter expression on interface. If unspecified and if the -d flag is not given, tcpdump searches the system interface list for the lowest numbered, configured up interface (excluding loopback), which may turn out to be, for example, ``eth0''. On Linux systems with 2.2 or later kernels and on recent versions of macOS and Solaris, an interface argument of ``any'' can be used to capture packets from all interfaces. Note that captures on the ``any'' pseudo- interface will not be done in promiscuous mode. If the -D flag is supported, an interface number as printed by that flag can be used as the interface argument, if no interface on the system has that number as a name. -I --monitor-mode Put the interface in "monitor mode"; this is supported only on IEEE 802.11 Wi-Fi interfaces, and supported only on some operating systems. Note that in monitor mode the adapter might disassociate from the network with which it's associated, so that you will not be able to use any wireless networks with that adapter. This could prevent accessing files on a network server, or resolving host names or network addresses, if you are capturing in monitor mode and are not connected to another network with another adapter. This flag will affect the output of the -L flag. If -I isn't specified, only those link-layer types available when not in monitor mode will be shown; if -I is specified, only those link-layer types available when in monitor mode will be shown. --immediate-mode Capture in "immediate mode". 
In this mode, packets are delivered to tcpdump as soon as they arrive, rather than being buffered for efficiency. This is the default when printing packets rather than saving packets to a ``savefile'' if the packets are being printed to a terminal rather than to a file or pipe. -j tstamp_type --time-stamp-type=tstamp_type Set the time stamp type for the capture to tstamp_type. The names to use for the time stamp types are given in pcap-tstamp(@MAN_MISC_INFO@); not all the types listed there will necessarily be valid for any given interface. -J --list-time-stamp-types List the supported time stamp types for the interface and exit. If the time stamp type cannot be set for the interface, no time stamp types are listed. --time-stamp-precision=tstamp_precision When capturing, set the time stamp precision for the capture to tstamp_precision. Note that availability of high precision time stamps (nanoseconds) and their actual accuracy is platform and hardware dependent. Also note that when writing captures made with nanosecond accuracy to a savefile, the time stamps are written with nanosecond resolution, and the file is written with a different magic number, to indicate that the time stamps are in seconds and nanoseconds; not all programs that read pcap savefiles will be able to read those captures. When reading a savefile, convert time stamps to the precision specified by timestamp_precision, and display them with that resolution. If the precision specified is less than the precision of time stamps in the file, the conversion will lose precision. The supported values for timestamp_precision are micro for microsecond resolution and nano for nanosecond resolution. The default is microsecond resolution. --micro --nano Shorthands for --time-stamp-precision=micro or --time-stamp-precision=nano, adjusting the time stamp precision accordingly. When reading packets from a savefile, using --micro truncates time stamps if the savefile was created with nanosecond precision. In contrast, a savefile created with microsecond precision will have trailing zeroes added to the time stamp when --nano is used. -K --dont-verify-checksums Don't attempt to verify IP, TCP, or UDP checksums. This is useful for interfaces that perform some or all of those checksum calculation in hardware; otherwise, all outgoing TCP checksums will be flagged as bad. -l Make stdout line buffered. Useful if you want to see the data while capturing it. E.g., tcpdump -l | tee dat or tcpdump -l > dat & tail -f dat Note that on Windows,``line buffered'' means ``unbuffered'', so that WinDump will write each character individually if -l is specified. -U is similar to -l in its behavior, but it will cause output to be ``packet-buffered'', so that the output is written to stdout at the end of each packet rather than at the end of each line; this is buffered on all platforms, including Windows. -L --list-data-link-types List the known data link types for the interface, in the specified mode, and exit. The list of known data link types may be dependent on the specified mode; for example, on some platforms, a Wi-Fi interface might support one set of data link types when not in monitor mode (for example, it might support only fake Ethernet headers, or might support 802.11 headers but not support 802.11 headers with radio information) and another set of data link types when in monitor mode (for example, it might support 802.11 headers, or 802.11 headers with radio information, only in monitor mode). 
-m module Load SMI MIB module definitions from file module. This option can be used several times to load several MIB modules into tcpdump. -M secret Use secret as a shared secret for validating the digests found in TCP segments with the TCP-MD5 option (RFC 2385), if present. -n Don't convert addresses (i.e., host addresses, port numbers, etc.) to names. -N Don't print domain name qualification of host names. E.g., if you give this flag then tcpdump will print ``nic'' instead of ``nic.ddn.mil''. -# --number Print an optional packet number at the beginning of the line. -O --no-optimize Do not run the packet-matching code optimizer. This is useful only if you suspect a bug in the optimizer. -p --no-promiscuous-mode Don't put the interface into promiscuous mode. Note that the interface might be in promiscuous mode for some other reason; hence, `-p' cannot be used as an abbreviation for `ether host {local-hw-addr} or ether broadcast'. --print Print parsed packet output, even if the raw packets are being saved to a file with the -w flag. --print-sampling=nth Print every nth packet. This option enables the --print flag. Unprinted packets are not parsed, which decreases processing time. Setting nth to 100 for example, will (counting from 1) parse and print the 100th packet, 200th packet, 300th packet, and so on. This option also enables the -S flag, as relative TCP sequence numbers are not tracked for unprinted packets. -Q direction --direction=direction Choose send/receive direction direction for which packets should be captured. Possible values are `in', `out' and `inout'. Not available on all platforms. -q Quick (quiet?) output. Print less protocol information so output lines are shorter. -r file Read packets from file (which was created with the -w option or by other tools that write pcap or pcapng files). Standard input is used if file is ``-''. -S --absolute-tcp-sequence-numbers Print absolute, rather than relative, TCP sequence numbers. -s snaplen --snapshot-length=snaplen Snarf snaplen bytes of data from each packet rather than the default of 262144 bytes. Packets truncated because of a limited snapshot are indicated in the output with ``[|proto]'', where proto is the name of the protocol level at which the truncation has occurred. Note that taking larger snapshots both increases the amount of time it takes to process packets and, effectively, decreases the amount of packet buffering. This may cause packets to be lost. Note also that taking smaller snapshots will discard data from protocols above the transport layer, which loses information that may be important. NFS and AFS requests and replies, for example, are very large, and much of the detail won't be available if a too-short snapshot length is selected. If you need to reduce the snapshot size below the default, you should limit snaplen to the smallest number that will capture the protocol information you're interested in. Setting snaplen to 0 sets it to the default of 262144, for backwards compatibility with recent older versions of tcpdump. -T type Force packets selected by "expression" to be interpreted the specified type. 
Currently known types are aodv (Ad- hoc On-demand Distance Vector protocol), carp (Common Address Redundancy Protocol), cnfp (Cisco NetFlow protocol), domain (Domain Name System), lmp (Link Management Protocol), pgm (Pragmatic General Multicast), pgm_zmtp1 (ZMTP/1.0 inside PGM/EPGM), ptp (Precision Time Protocol), quic (QUIC), radius (RADIUS), resp (REdis Serialization Protocol), rpc (Remote Procedure Call), rtcp (Real-Time Applications control protocol), rtp (Real-Time Applications protocol), snmp (Simple Network Management Protocol), someip (SOME/IP), tftp (Trivial File Transfer Protocol), vat (Visual Audio Tool), vxlan (Virtual eXtensible Local Area Network), wb (distributed White Board) and zmtp1 (ZeroMQ Message Transport Protocol 1.0). Note that the pgm type above affects UDP interpretation only, the native PGM is always recognised as IP protocol 113 regardless. UDP-encapsulated PGM is often called "EPGM" or "PGM/UDP". Note that the pgm_zmtp1 type above affects interpretation of both native PGM and UDP at once. During the native PGM decoding the application data of an ODATA/RDATA packet would be decoded as a ZeroMQ datagram with ZMTP/1.0 frames. During the UDP decoding in addition to that any UDP packet would be treated as an encapsulated PGM packet. -t Don't print a timestamp on each dump line. -tt Print the timestamp, as seconds since January 1, 1970, 00:00:00, UTC, and fractions of a second since that time, on each dump line. -ttt Print a delta (microsecond or nanosecond resolution depending on the --time-stamp-precision option) between current and previous line on each dump line. The default is microsecond resolution. -tttt Print a timestamp, as hours, minutes, seconds, and fractions of a second since midnight, preceded by the date, on each dump line. -ttttt Print a delta (microsecond or nanosecond resolution depending on the --time-stamp-precision option) between current and first line on each dump line. The default is microsecond resolution. -u Print undecoded NFS handles. -U --packet-buffered If the -w option is not specified, or if it is specified but the --print flag is also specified, make the printed packet output ``packet-buffered''; i.e., as the description of the contents of each packet is printed, it will be written to the standard output, rather than, when not writing to a terminal, being written only when the output buffer fills. If the -w option is specified, make the saved raw packet output ``packet-buffered''; i.e., as each packet is saved, it will be written to the output file, rather than being written only when the output buffer fills. The -U flag will not be supported if tcpdump was built with an older version of libpcap that lacks the pcap_dump_flush(3PCAP) function. -v When parsing and printing, produce (slightly more) verbose output. For example, the time to live, identification, total length and options in an IP packet are printed. Also enables additional packet integrity checks such as verifying the IP and ICMP header checksum. When writing to a file with the -w option and at the same time not reading from a file with the -r option, report to stderr, once per second, the number of packets captured. In Solaris, FreeBSD and possibly other operating systems this periodic update currently can cause loss of captured packets on their way from the kernel to tcpdump. -vv Even more verbose output. For example, additional fields are printed from NFS reply packets, and SMB packets are fully decoded. -vvv Even more verbose output. For example, telnet SB ... 
SE options are printed in full. With -X Telnet options are printed in hex as well. -V file Read a list of filenames from file. Standard input is used if file is ``-''. -w file Write the raw packets to file rather than parsing and printing them out. They can later be printed with the -r option. Standard output is used if file is ``-''. This output will be buffered if written to a file or pipe, so a program reading from the file or pipe may not see packets for an arbitrary amount of time after they are received. Use the -U flag to cause packets to be written as soon as they are received. The MIME type application/vnd.tcpdump.pcap has been registered with IANA for pcap files. The filename extension .pcap appears to be the most commonly used along with .cap and .dmp. Tcpdump itself doesn't check the extension when reading capture files and doesn't add an extension when writing them (it uses magic numbers in the file header instead). However, many operating systems and applications will use the extension if it is present and adding one (e.g. .pcap) is recommended. See pcap-savefile(@MAN_FILE_FORMATS@) for a description of the file format. -W filecount Used in conjunction with the -C option, this will limit the number of files created to the specified number, and begin overwriting files from the beginning, thus creating a 'rotating' buffer. In addition, it will name the files with enough leading 0s to support the maximum number of files, allowing them to sort correctly. Used in conjunction with the -G option, this will limit the number of rotated dump files that get created, exiting with status 0 when reaching the limit. If used in conjunction with both -C and -G, the -W option will currently be ignored, and will only affect the file name. -x When parsing and printing, in addition to printing the headers of each packet, print the data of each packet (minus its link level header) in hex. The smaller of the entire packet or snaplen bytes will be printed. Note that this is the entire link-layer packet, so for link layers that pad (e.g. Ethernet), the padding bytes will also be printed when the higher layer packet is shorter than the required padding. In the current implementation this flag may have the same effect as -xx if the packet is truncated. -xx When parsing and printing, in addition to printing the headers of each packet, print the data of each packet, including its link level header, in hex. -X When parsing and printing, in addition to printing the headers of each packet, print the data of each packet (minus its link level header) in hex and ASCII. This is very handy for analysing new protocols. In the current implementation this flag may have the same effect as -XX if the packet is truncated. -XX When parsing and printing, in addition to printing the headers of each packet, print the data of each packet, including its link level header, in hex and ASCII. -y datalinktype --linktype=datalinktype Set the data link type to use while capturing packets (see -L) or just compiling and dumping packet-matching code (see -d) to datalinktype. -z postrotate-command Used in conjunction with the -C or -G options, this will make tcpdump run " postrotate-command file " where file is the savefile being closed after each rotation. For example, specifying -z gzip or -z bzip2 will compress each savefile using gzip or bzip2. Note that tcpdump will run the command in parallel to the capture, using the lowest priority so that this doesn't disturb the capture process. 
And in case you would like to use a command that itself takes flags or different arguments, you can always write a shell script that will take the savefile name as the only argument, make the flags & arguments arrangements and execute the command that you want. -Z user --relinquish-privileges=user If tcpdump is running as root, after opening the capture device or input savefile, but before opening any savefiles for output, change the user ID to user and the group ID to the primary group of user. This behavior can also be enabled by default at compile time. expression selects which packets will be dumped. If no expression is given, all packets on the net will be dumped. Otherwise, only packets for which expression is `true' will be dumped. For the expression syntax, see pcap-filter(@MAN_MISC_INFO@). The expression argument can be passed to tcpdump as either a single Shell argument, or as multiple Shell arguments, whichever is more convenient. Generally, if the expression contains Shell metacharacters, such as backslashes used to escape protocol names, it is easier to pass it as a single, quoted argument rather than to escape the Shell metacharacters. Multiple arguments are concatenated with spaces before being parsed. | # tcpdump
> Dump traffic on a network. More information: https://www.tcpdump.org.
* List available network interfaces:
`tcpdump -D`
* Capture the traffic of a specific interface:
`tcpdump -i {{eth0}}`
* Capture all TCP traffic, showing packet contents (ASCII) in the console:
`tcpdump -A tcp`
* Capture the traffic from or to a host:
`tcpdump host {{www.example.com}}`
* Capture the traffic from a specific interface, source, destination and destination port:
`tcpdump -i {{eth0}} src {{192.168.1.1}} and dst {{192.168.1.2}} and dst port {{80}}`
* Capture the traffic of a network:
`tcpdump net {{192.168.1.0/24}}`
* Capture all traffic except traffic over port 22 and save to a dump file:
`tcpdump -w {{dumpfile.pcap}} port not {{22}}`
* Read from a given dump file:
`tcpdump -r {{dumpfile.pcap}}` |
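A small capture-and-replay sketch combining the flags above, assuming an interface named `eth0` (capturing normally requires root):

```sh
# Capture DNS traffic on eth0 into a savefile.
sudo tcpdump -i eth0 -w dns.pcap 'udp port 53'

# Read the savefile back later, without name resolution, in hex and ASCII.
tcpdump -r dns.pcap -n -X
```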
users | Output who is currently logged in according to FILE. If FILE is not specified, use /var/run/utmp. /var/log/wtmp as FILE is common. --help display this help and exit --version output version information and exit | # users
> Display a list of logged-in users. See also: `useradd`, `userdel`,
> `usermod`. More information: https://www.gnu.org/software/coreutils/users.
* Print logged in usernames:
`users`
* Print logged in usernames according to a given file:
`users {{/var/log/wtmp}}` |
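A quick sketch building on the `users` examples above; `/var/log/wtmp` holds historical logins on most systems:

```sh
# Current login names, one per line, with duplicate sessions collapsed.
users | tr ' ' '\n' | sort -u

# Logins recorded in the historical wtmp log instead of the live utmp file.
users /var/log/wtmp
```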
git-rev-list | List commits that are reachable by following the parent links from the given commit(s), but exclude commits that are reachable from the one(s) given with a ^ in front of them. The output is given in reverse chronological order by default. You can think of this as a set operation. Commits reachable from any of the commits given on the command line form a set, and then commits reachable from any of the ones given with ^ in front are subtracted from that set. The remaining commits are what comes out in the command’s output. Various other options and paths parameters can be used to further limit the result. Thus, the following command: $ git rev-list foo bar ^baz means "list all the commits which are reachable from foo or bar, but not from baz". A special notation "<commit1>..<commit2>" can be used as a short-hand for "^<commit1> <commit2>". For example, either of the following may be used interchangeably: $ git rev-list origin..HEAD $ git rev-list HEAD ^origin Another special notation is "<commit1>...<commit2>" which is useful for merges. The resulting set of commits is the symmetric difference between the two operands. The following two commands are equivalent: $ git rev-list A B --not $(git merge-base --all A B) $ git rev-list A...B rev-list is a very essential Git command, since it provides the ability to build and traverse commit ancestry graphs. For this reason, it has a lot of different options that enables it to be used by commands as different as git bisect and git repack. Commit Limiting Besides specifying a range of commits that should be listed using the special notations explained in the description, additional commit limiting may be applied. Using more options generally further limits the output (e.g. --since=<date1> limits to commits newer than <date1>, and using it with --grep=<pattern> further limits to commits whose log message has a line that matches <pattern>), unless otherwise noted. Note that these are applied before commit ordering and formatting options, such as --reverse. -<number>, -n <number>, --max-count=<number> Limit the number of commits to output. --skip=<number> Skip number commits before starting to show the commit output. --since=<date>, --after=<date> Show commits more recent than a specific date. --since-as-filter=<date> Show all commits more recent than a specific date. This visits all commits in the range, rather than stopping at the first commit which is older than a specific date. --until=<date>, --before=<date> Show commits older than a specific date. --max-age=<timestamp>, --min-age=<timestamp> Limit the commits output to specified time range. --author=<pattern>, --committer=<pattern> Limit the commits output to ones with author/committer header lines that match the specified pattern (regular expression). With more than one --author=<pattern>, commits whose author matches any of the given patterns are chosen (similarly for multiple --committer=<pattern>). --grep-reflog=<pattern> Limit the commits output to ones with reflog entries that match the specified pattern (regular expression). With more than one --grep-reflog, commits whose reflog message matches any of the given patterns are chosen. It is an error to use this option unless --walk-reflogs is in use. --grep=<pattern> Limit the commits output to ones with log message that matches the specified pattern (regular expression). With more than one --grep=<pattern>, commits whose message matches any of the given patterns are chosen (but see --all-match). 
--all-match Limit the commits output to ones that match all given --grep, instead of ones that match at least one. --invert-grep Limit the commits output to ones with log message that do not match the pattern specified with --grep=<pattern>. -i, --regexp-ignore-case Match the regular expression limiting patterns without regard to letter case. --basic-regexp Consider the limiting patterns to be basic regular expressions; this is the default. -E, --extended-regexp Consider the limiting patterns to be extended regular expressions instead of the default basic regular expressions. -F, --fixed-strings Consider the limiting patterns to be fixed strings (don’t interpret pattern as a regular expression). -P, --perl-regexp Consider the limiting patterns to be Perl-compatible regular expressions. Support for these types of regular expressions is an optional compile-time dependency. If Git wasn’t compiled with support for them providing this option will cause it to die. --remove-empty Stop when a given path disappears from the tree. --merges Print only merge commits. This is exactly the same as --min-parents=2. --no-merges Do not print commits with more than one parent. This is exactly the same as --max-parents=1. --min-parents=<number>, --max-parents=<number>, --no-min-parents, --no-max-parents Show only commits which have at least (or at most) that many parent commits. In particular, --max-parents=1 is the same as --no-merges, --min-parents=2 is the same as --merges. --max-parents=0 gives all root commits and --min-parents=3 all octopus merges. --no-min-parents and --no-max-parents reset these limits (to no limit) again. Equivalent forms are --min-parents=0 (any commit has 0 or more parents) and --max-parents=-1 (negative numbers denote no upper limit). --first-parent When finding commits to include, follow only the first parent commit upon seeing a merge commit. This option can give a better overview when viewing the evolution of a particular topic branch, because merges into a topic branch tend to be only about adjusting to updated upstream from time to time, and this option allows you to ignore the individual commits brought in to your history by such a merge. --exclude-first-parent-only When finding commits to exclude (with a ^), follow only the first parent commit upon seeing a merge commit. This can be used to find the set of changes in a topic branch from the point where it diverged from the remote branch, given that arbitrary merges can be valid topic branch changes. --not Reverses the meaning of the ^ prefix (or lack thereof) for all following revision specifiers, up to the next --not. --all Pretend as if all the refs in refs/, along with HEAD, are listed on the command line as <commit>. --branches[=<pattern>] Pretend as if all the refs in refs/heads are listed on the command line as <commit>. If <pattern> is given, limit branches to ones matching given shell glob. If pattern lacks ?, *, or [, /* at the end is implied. --tags[=<pattern>] Pretend as if all the refs in refs/tags are listed on the command line as <commit>. If <pattern> is given, limit tags to ones matching given shell glob. If pattern lacks ?, *, or [, /* at the end is implied. --remotes[=<pattern>] Pretend as if all the refs in refs/remotes are listed on the command line as <commit>. If <pattern> is given, limit remote-tracking branches to ones matching given shell glob. If pattern lacks ?, *, or [, /* at the end is implied. 
--glob=<glob-pattern> Pretend as if all the refs matching shell glob <glob-pattern> are listed on the command line as <commit>. Leading refs/, is automatically prepended if missing. If pattern lacks ?, *, or [, /* at the end is implied. --exclude=<glob-pattern> Do not include refs matching <glob-pattern> that the next --all, --branches, --tags, --remotes, or --glob would otherwise consider. Repetitions of this option accumulate exclusion patterns up to the next --all, --branches, --tags, --remotes, or --glob option (other options or arguments do not clear accumulated patterns). The patterns given should not begin with refs/heads, refs/tags, or refs/remotes when applied to --branches, --tags, or --remotes, respectively, and they must begin with refs/ when applied to --glob or --all. If a trailing /* is intended, it must be given explicitly. --exclude-hidden=[fetch|receive|uploadpack] Do not include refs that would be hidden by git-fetch, git-receive-pack or git-upload-pack by consulting the appropriate fetch.hideRefs, receive.hideRefs or uploadpack.hideRefs configuration along with transfer.hideRefs (see git-config(1)). This option affects the next pseudo-ref option --all or --glob and is cleared after processing them. --reflog Pretend as if all objects mentioned by reflogs are listed on the command line as <commit>. --alternate-refs Pretend as if all objects mentioned as ref tips of alternate repositories were listed on the command line. An alternate repository is any repository whose object directory is specified in objects/info/alternates. The set of included objects may be modified by core.alternateRefsCommand, etc. See git-config(1). --single-worktree By default, all working trees will be examined by the following options when there are more than one (see git-worktree(1)): --all, --reflog and --indexed-objects. This option forces them to examine the current working tree only. --ignore-missing Upon seeing an invalid object name in the input, pretend as if the bad input was not given. --stdin In addition to the <commit> listed on the command line, read them from the standard input. If a -- separator is seen, stop reading commits and start reading paths to limit the result. --quiet Don’t print anything to standard output. This form is primarily meant to allow the caller to test the exit status to see if a range of objects is fully connected (or not). It is faster than redirecting stdout to /dev/null as the output does not have to be formatted. --disk-usage, --disk-usage=human Suppress normal output; instead, print the sum of the bytes used for on-disk storage by the selected commits or objects. This is equivalent to piping the output into git cat-file --batch-check='%(objectsize:disk)', except that it runs much faster (especially with --use-bitmap-index). See the CAVEATS section in git-cat-file(1) for the limitations of what "on-disk storage" means. With the optional value human, on-disk storage size is shown in human-readable string(e.g. 12.24 Kib, 3.50 Mib). --cherry-mark Like --cherry-pick (see below) but mark equivalent commits with = rather than omitting them, and inequivalent ones with +. --cherry-pick Omit any commit that introduces the same change as another commit on the “other side” when the set of commits are limited with symmetric difference. For example, if you have two branches, A and B, a usual way to list all commits on only one side of them is with --left-right (see the example below in the description of the --left-right option). 
However, it shows the commits that were cherry-picked from the other branch (for example, “3rd on b” may be cherry-picked from branch A). With this option, such pairs of commits are excluded from the output. --left-only, --right-only List only commits on the respective side of a symmetric difference, i.e. only those which would be marked < resp. > by --left-right. For example, --cherry-pick --right-only A...B omits those commits from B which are in A or are patch-equivalent to a commit in A. In other words, this lists the + commits from git cherry A B. More precisely, --cherry-pick --right-only --no-merges gives the exact list. --cherry A synonym for --right-only --cherry-mark --no-merges; useful to limit the output to the commits on our side and mark those that have been applied to the other side of a forked history with git log --cherry upstream...mybranch, similar to git cherry upstream mybranch. -g, --walk-reflogs Instead of walking the commit ancestry chain, walk reflog entries from the most recent one to older ones. When this option is used you cannot specify commits to exclude (that is, ^commit, commit1..commit2, and commit1...commit2 notations cannot be used). With --pretty format other than oneline and reference (for obvious reasons), this causes the output to have two extra lines of information taken from the reflog. The reflog designator in the output may be shown as ref@{Nth} (where Nth is the reverse-chronological index in the reflog) or as ref@{timestamp} (with the timestamp for that entry), depending on a few rules: 1. If the starting point is specified as ref@{Nth}, show the index format. 2. If the starting point was specified as ref@{now}, show the timestamp format. 3. If neither was used, but --date was given on the command line, show the timestamp in the format requested by --date. 4. Otherwise, show the index format. Under --pretty=oneline, the commit message is prefixed with this information on the same line. This option cannot be combined with --reverse. See also git-reflog(1). Under --pretty=reference, this information will not be shown at all. --merge After a failed merge, show refs that touch files having a conflict and don’t exist on all heads to merge. --boundary Output excluded boundary commits. Boundary commits are prefixed with -. --use-bitmap-index Try to speed up the traversal using the pack bitmap index (if one is available). Note that when traversing with --objects, trees and blobs will not have their associated path printed. --progress=<header> Show progress reports on stderr as objects are considered. The <header> text will be printed with each progress update. History Simplification Sometimes you are only interested in parts of the history, for example the commits modifying a particular <path>. But there are two parts of History Simplification, one part is selecting the commits and the other is how to do it, as there are various strategies to simplify the history. The following options select the commits to be shown: <paths> Commits modifying the given <paths> are selected. --simplify-by-decoration Commits that are referred by some branch or tag are selected. Note that extra commits can be shown to give a meaningful history. The following options affect the way the simplification is performed: Default mode Simplifies the history to the simplest history explaining the final state of the tree. Simplest because it prunes some side branches if the end result is the same (i.e. 
merging branches with the same content) --show-pulls Include all commits from the default mode, but also any merge commits that are not TREESAME to the first parent but are TREESAME to a later parent. This mode is helpful for showing the merge commits that "first introduced" a change to a branch. --full-history Same as the default mode, but does not prune some history. --dense Only the selected commits are shown, plus some to have a meaningful history. --sparse All commits in the simplified history are shown. --simplify-merges Additional option to --full-history to remove some needless merges from the resulting history, as there are no selected commits contributing to this merge. --ancestry-path[=<commit>] When given a range of commits to display (e.g. commit1..commit2 or commit2 ^commit1), only display commits in that range that are ancestors of <commit>, descendants of <commit>, or <commit> itself. If no commit is specified, use commit1 (the excluded part of the range) as <commit>. Can be passed multiple times; if so, a commit is included if it is any of the commits given or if it is an ancestor or descendant of one of them. A more detailed explanation follows. Suppose you specified foo as the <paths>. We shall call commits that modify foo !TREESAME, and the rest TREESAME. (In a diff filtered for foo, they look different and equal, respectively.) In the following, we will always refer to the same example history to illustrate the differences between simplification settings. We assume that you are filtering for a file foo in this commit graph: .-A---M---N---O---P---Q / / / / / / I B C D E Y \ / / / / / `-------------' X The horizontal line of history A---Q is taken to be the first parent of each merge. The commits are: • I is the initial commit, in which foo exists with contents “asdf”, and a file quux exists with contents “quux”. Initial commits are compared to an empty tree, so I is !TREESAME. • In A, foo contains just “foo”. • B contains the same change as A. Its merge M is trivial and hence TREESAME to all parents. • C does not change foo, but its merge N changes it to “foobar”, so it is not TREESAME to any parent. • D sets foo to “baz”. Its merge O combines the strings from N and D to “foobarbaz”; i.e., it is not TREESAME to any parent. • E changes quux to “xyzzy”, and its merge P combines the strings to “quux xyzzy”. P is TREESAME to O, but not to E. • X is an independent root commit that added a new file side, and Y modified it. Y is TREESAME to X. Its merge Q added side to P, and Q is TREESAME to P, but not to Y. rev-list walks backwards through history, including or excluding commits based on whether --full-history and/or parent rewriting (via --parents or --children) are used. The following settings are available. Default mode Commits are included if they are not TREESAME to any parent (though this can be changed, see --sparse below). If the commit was a merge, and it was TREESAME to one parent, follow only that parent. (Even if there are several TREESAME parents, follow only one of them.) Otherwise, follow all parents. This results in: .-A---N---O / / / I---------D Note how the rule to only follow the TREESAME parent, if one is available, removed B from consideration entirely. C was considered via N, but is TREESAME. Root commits are compared to an empty tree, so I is !TREESAME. Parent/child relations are only visible with --parents, but that does not affect the commits selected in default mode, so we have shown the parent lines. 
--full-history without parent rewriting This mode differs from the default in one point: always follow all parents of a merge, even if it is TREESAME to one of them. Even if more than one side of the merge has commits that are included, this does not imply that the merge itself is! In the example, we get I A B N D O P Q M was excluded because it is TREESAME to both parents. E, C and B were all walked, but only B was !TREESAME, so the others do not appear. Note that without parent rewriting, it is not really possible to talk about the parent/child relationships between the commits, so we show them disconnected. --full-history with parent rewriting Ordinary commits are only included if they are !TREESAME (though this can be changed, see --sparse below). Merges are always included. However, their parent list is rewritten: Along each parent, prune away commits that are not included themselves. This results in .-A---M---N---O---P---Q / / / / / I B / D / \ / / / / `-------------' Compare to --full-history without rewriting above. Note that E was pruned away because it is TREESAME, but the parent list of P was rewritten to contain E's parent I. The same happened for C and N, and X, Y and Q. In addition to the above settings, you can change whether TREESAME affects inclusion: --dense Commits that are walked are included if they are not TREESAME to any parent. --sparse All commits that are walked are included. Note that without --full-history, this still simplifies merges: if one of the parents is TREESAME, we follow only that one, so the other sides of the merge are never walked. --simplify-merges First, build a history graph in the same way that --full-history with parent rewriting does (see above). Then simplify each commit C to its replacement C' in the final history according to the following rules: • Set C' to C. • Replace each parent P of C' with its simplification P'. In the process, drop parents that are ancestors of other parents or that are root commits TREESAME to an empty tree, and remove duplicates, but take care to never drop all parents that we are TREESAME to. • If after this parent rewriting, C' is a root or merge commit (has zero or >1 parents), a boundary commit, or !TREESAME, it remains. Otherwise, it is replaced with its only parent. The effect of this is best shown by way of comparing to --full-history with parent rewriting. The example turns into: .-A---M---N---O / / / I B D \ / / `---------' Note the major differences in N, P, and Q over --full-history: • N's parent list had I removed, because it is an ancestor of the other parent M. Still, N remained because it is !TREESAME. • P's parent list similarly had I removed. P was then removed completely, because it had one parent and is TREESAME. • Q's parent list had Y simplified to X. X was then removed, because it was a TREESAME root. Q was then removed completely, because it had one parent and is TREESAME. There is another simplification mode available: --ancestry-path[=<commit>] Limit the displayed commits to those which are an ancestor of <commit>, or which are a descendant of <commit>, or are <commit> itself. As an example use case, consider the following commit history: D---E-------F / \ \ B---C---G---H---I---J / \ A-------K---------------L--M A regular D..M computes the set of commits that are ancestors of M, but excludes the ones that are ancestors of D. This is useful to see what happened to the history leading to M since D, in the sense that “what does M have that did not exist in D”. 
The result in this example would be all the commits, except A and B (and D itself, of course). When we want to find out what commits in M are contaminated with the bug introduced by D and need fixing, however, we might want to view only the subset of D..M that are actually descendants of D, i.e. excluding C and K. This is exactly what the --ancestry-path option does. Applied to the D..M range, it results in: E-------F \ \ G---H---I---J \ L--M We can also use --ancestry-path=D instead of --ancestry-path which means the same thing when applied to the D..M range but is just more explicit. If we instead are interested in a given topic within this range, and all commits affected by that topic, we may only want to view the subset of D..M which contain that topic in their ancestry path. So, using --ancestry-path=H D..M for example would result in: E \ G---H---I---J \ L--M Whereas --ancestry-path=K D..M would result in K---------------L--M Before discussing another option, --show-pulls, we need to create a new example history. A common problem users face when looking at simplified history is that a commit they know changed a file somehow does not appear in the file’s simplified history. Let’s demonstrate a new example and show how options such as --full-history and --simplify-merges works in that case: .-A---M-----C--N---O---P / / \ \ \/ / / I B \ R-'`-Z' / \ / \/ / \ / /\ / `---X--' `---Y--' For this example, suppose I created file.txt which was modified by A, B, and X in different ways. The single-parent commits C, Z, and Y do not change file.txt. The merge commit M was created by resolving the merge conflict to include both changes from A and B and hence is not TREESAME to either. The merge commit R, however, was created by ignoring the contents of file.txt at M and taking only the contents of file.txt at X. Hence, R is TREESAME to X but not M. Finally, the natural merge resolution to create N is to take the contents of file.txt at R, so N is TREESAME to R but not C. The merge commits O and P are TREESAME to their first parents, but not to their second parents, Z and Y respectively. When using the default mode, N and R both have a TREESAME parent, so those edges are walked and the others are ignored. The resulting history graph is: I---X When using --full-history, Git walks every edge. This will discover the commits A and B and the merge M, but also will reveal the merge commits O and P. With parent rewriting, the resulting graph is: .-A---M--------N---O---P / / \ \ \/ / / I B \ R-'`--' / \ / \/ / \ / /\ / `---X--' `------' Here, the merge commits O and P contribute extra noise, as they did not actually contribute a change to file.txt. They only merged a topic that was based on an older version of file.txt. This is a common issue in repositories using a workflow where many contributors work in parallel and merge their topic branches along a single trunk: many unrelated merges appear in the --full-history results. When using the --simplify-merges option, the commits O and P disappear from the results. This is because the rewritten second parents of O and P are reachable from their first parents. Those edges are removed and then the commits look like single-parent commits that are TREESAME to their parent. This also happens to the commit N, resulting in a history view as follows: .-A---M--. / / \ I B R \ / / \ / / `---X--' In this view, we see all of the important single-parent changes from A, B, and X. We also see the carefully-resolved merge M and the not-so-carefully-resolved merge R. 
This is usually enough information to determine why the commits A and B "disappeared" from history in the default view. However, there are a few issues with this approach. The first issue is performance. Unlike any previous option, the --simplify-merges option requires walking the entire commit history before returning a single result. This can make the option difficult to use for very large repositories. The second issue is one of auditing. When many contributors are working on the same repository, it is important which merge commits introduced a change into an important branch. The problematic merge R above is not likely to be the merge commit that was used to merge into an important branch. Instead, the merge N was used to merge R and X into the important branch. This commit may have information about why the change X came to override the changes from A and B in its commit message. --show-pulls In addition to the commits shown in the default history, show each merge commit that is not TREESAME to its first parent but is TREESAME to a later parent. When a merge commit is included by --show-pulls, the merge is treated as if it "pulled" the change from another branch. When using --show-pulls on this example (and no other options) the resulting graph is: I---X---R---N Here, the merge commits R and N are included because they pulled the commits X and R into the base branch, respectively. These merges are the reason the commits A and B do not appear in the default history. When --show-pulls is paired with --simplify-merges, the graph includes all of the necessary information: .-A---M--. N / / \ / I B R \ / / \ / / `---X--' Notice that since M is reachable from R, the edge from N to M was simplified away. However, N still appears in the history as an important commit because it "pulled" the change R into the main branch. The --simplify-by-decoration option allows you to view only the big picture of the topology of the history, by omitting commits that are not referenced by tags. Commits are marked as !TREESAME (in other words, kept after history simplification rules described above) if (1) they are referenced by tags, or (2) they change the contents of the paths given on the command line. All other commits are marked as TREESAME (subject to be simplified away). Bisection Helpers --bisect Limit output to the one commit object which is roughly halfway between included and excluded commits. Note that the bad bisection ref refs/bisect/bad is added to the included commits (if it exists) and the good bisection refs refs/bisect/good-* are added to the excluded commits (if they exist). Thus, supposing there are no refs in refs/bisect/, if $ git rev-list --bisect foo ^bar ^baz outputs midpoint, the output of the two commands $ git rev-list foo ^midpoint $ git rev-list midpoint ^bar ^baz would be of roughly the same length. Finding the change which introduces a regression is thus reduced to a binary search: repeatedly generate and test new 'midpoint’s until the commit chain is of length one. --bisect-vars This calculates the same as --bisect, except that refs in refs/bisect/ are not used, and except that this outputs text ready to be eval’ed by the shell. 
These lines will assign the name of the midpoint revision to the variable bisect_rev, and the expected number of commits to be tested after bisect_rev is tested to bisect_nr, the expected number of commits to be tested if bisect_rev turns out to be good to bisect_good, the expected number of commits to be tested if bisect_rev turns out to be bad to bisect_bad, and the number of commits we are bisecting right now to bisect_all. --bisect-all This outputs all the commit objects between the included and excluded commits, ordered by their distance to the included and excluded commits. Refs in refs/bisect/ are not used. The farthest from them is displayed first. (This is the only one displayed by --bisect.) This is useful because it makes it easy to choose a good commit to test when you want to avoid to test some of them for some reason (they may not compile for example). This option can be used along with --bisect-vars, in this case, after all the sorted commit objects, there will be the same text as if --bisect-vars had been used alone. Commit Ordering By default, the commits are shown in reverse chronological order. --date-order Show no parents before all of its children are shown, but otherwise show commits in the commit timestamp order. --author-date-order Show no parents before all of its children are shown, but otherwise show commits in the author timestamp order. --topo-order Show no parents before all of its children are shown, and avoid showing commits on multiple lines of history intermixed. For example, in a commit history like this: ---1----2----4----7 \ \ 3----5----6----8--- where the numbers denote the order of commit timestamps, git rev-list and friends with --date-order show the commits in the timestamp order: 8 7 6 5 4 3 2 1. With --topo-order, they would show 8 6 5 3 7 4 2 1 (or 8 7 4 2 6 5 3 1); some older commits are shown before newer ones in order to avoid showing the commits from two parallel development track mixed together. --reverse Output the commits chosen to be shown (see Commit Limiting section above) in reverse order. Cannot be combined with --walk-reflogs. Object Traversal These options are mostly targeted for packing of Git repositories. --objects Print the object IDs of any object referenced by the listed commits. --objects foo ^bar thus means “send me all object IDs which I need to download if I have the commit object bar but not foo”. See also --object-names below. --in-commit-order Print tree and blob ids in order of the commits. The tree and blob ids are printed after they are first referenced by a commit. --objects-edge Similar to --objects, but also print the IDs of excluded commits prefixed with a “-” character. This is used by git-pack-objects(1) to build a “thin” pack, which records objects in deltified form based on objects contained in these excluded commits to reduce network traffic. --objects-edge-aggressive Similar to --objects-edge, but it tries harder to find excluded commits at the cost of increased time. This is used instead of --objects-edge to build “thin” packs for shallow repositories. --indexed-objects Pretend as if all trees and blobs used by the index are listed on the command line. Note that you probably want to use --objects, too. --unpacked Only useful with --objects; print the object IDs that are not in packs. --object-names Only useful with --objects; print the names of the object IDs that are found. This is the default behavior. Note that the "name" of each object is ambiguous, and mostly intended as a hint for packing objects. 
In particular: no distinction is made between the names of tags, trees, and blobs; path names may be modified to remove newlines; and if an object would appear multiple times with different names, only one name is shown. --no-object-names Only useful with --objects; does not print the names of the object IDs that are found. This inverts --object-names. This flag allows the output to be more easily parsed by commands such as git-cat-file(1). --filter=<filter-spec> Only useful with one of the --objects*; omits objects (usually blobs) from the list of printed objects. The <filter-spec> may be one of the following: The form --filter=blob:none omits all blobs. The form --filter=blob:limit=<n>[kmg] omits blobs larger than n bytes or units. n may be zero. The suffixes k, m, and g can be used to name units in KiB, MiB, or GiB. For example, blob:limit=1k is the same as blob:limit=1024. The form --filter=object:type=(tag|commit|tree|blob) omits all objects which are not of the requested type. The form --filter=sparse:oid=<blob-ish> uses a sparse-checkout specification contained in the blob (or blob-expression) <blob-ish> to omit blobs that would not be required for a sparse checkout on the requested refs. The form --filter=tree:<depth> omits all blobs and trees whose depth from the root tree is >= <depth> (minimum depth if an object is located at multiple depths in the commits traversed). <depth>=0 will not include any trees or blobs unless included explicitly in the command-line (or standard input when --stdin is used). <depth>=1 will include only the tree and blobs which are referenced directly by a commit reachable from <commit> or an explicitly-given object. <depth>=2 is like <depth>=1 while also including trees and blobs one more level removed from an explicitly-given commit or tree. Note that the form --filter=sparse:path=<path> that wants to read from an arbitrary path on the filesystem has been dropped for security reasons. Multiple --filter= flags can be specified to combine filters. Only objects which are accepted by every filter are included. The form --filter=combine:<filter1>+<filter2>+...<filterN> can also be used to combined several filters, but this is harder than just repeating the --filter flag and is usually not necessary. Filters are joined by + and individual filters are %-encoded (i.e. URL-encoded). Besides the + and % characters, the following characters are reserved and also must be encoded: ~!@#$^&*()[]{}\;",<>?'` as well as all characters with ASCII code <= 0x20, which includes space and newline. Other arbitrary characters can also be encoded. For instance, combine:tree:3+blob:none and combine:tree%3A3+blob%3Anone are equivalent. --no-filter Turn off any previous --filter= argument. --filter-provided-objects Filter the list of explicitly provided objects, which would otherwise always be printed even if they did not match any of the filters. Only useful with --filter=. --filter-print-omitted Only useful with --filter=; prints a list of the objects omitted by the filter. Object IDs are prefixed with a “~” character. --missing=<missing-action> A debug option to help with future "partial clone" development. This option specifies how missing objects are handled. The form --missing=error requests that rev-list stop with an error if a missing object is encountered. This is the default action. The form --missing=allow-any will allow object traversal to continue if a missing object is encountered. Missing objects will silently be omitted from the results. 
The form --missing=allow-promisor is like allow-any, but will only allow object traversal to continue for EXPECTED promisor missing objects. Unexpected missing objects will raise an error. The form --missing=print is like allow-any, but will also print a list of the missing objects. Object IDs are prefixed with a “?” character. --exclude-promisor-objects (For internal use only.) Prefilter object traversal at promisor boundary. This is used with partial clone. This is stronger than --missing=allow-promisor because it limits the traversal, rather than just silencing errors about missing objects. --no-walk[=(sorted|unsorted)] Only show the given commits, but do not traverse their ancestors. This has no effect if a range is specified. If the argument unsorted is given, the commits are shown in the order they were given on the command line. Otherwise (if sorted or no argument was given), the commits are shown in reverse chronological order by commit time. Cannot be combined with --graph. --do-walk Overrides a previous --no-walk. Commit Formatting Using these options, git-rev-list(1) will act similar to the more specialized family of commit log tools: git-log(1), git-show(1), and git-whatchanged(1) --pretty[=<format>], --format=<format> Pretty-print the contents of the commit logs in a given format, where <format> can be one of oneline, short, medium, full, fuller, reference, email, raw, format:<string> and tformat:<string>. When <format> is none of the above, and has %placeholder in it, it acts as if --pretty=tformat:<format> were given. See the "PRETTY FORMATS" section for some additional details for each format. When =<format> part is omitted, it defaults to medium. Note: you can specify the default pretty format in the repository configuration (see git-config(1)). --abbrev-commit Instead of showing the full 40-byte hexadecimal commit object name, show a prefix that names the object uniquely. "--abbrev=<n>" (which also modifies diff output, if it is displayed) option can be used to specify the minimum length of the prefix. This should make "--pretty=oneline" a whole lot more readable for people using 80-column terminals. --no-abbrev-commit Show the full 40-byte hexadecimal commit object name. This negates --abbrev-commit, either explicit or implied by other options such as "--oneline". It also overrides the log.abbrevCommit variable. --oneline This is a shorthand for "--pretty=oneline --abbrev-commit" used together. --encoding=<encoding> Commit objects record the character encoding used for the log message in their encoding header; this option can be used to tell the command to re-code the commit log message in the encoding preferred by the user. For non plumbing commands this defaults to UTF-8. Note that if an object claims to be encoded in X and we are outputting in X, we will output the object verbatim; this means that invalid sequences in the original commit may be copied to the output. Likewise, if iconv(3) fails to convert the commit, we will quietly output the original object verbatim. --expand-tabs=<n>, --expand-tabs, --no-expand-tabs Perform a tab expansion (replace each tab with enough spaces to fill to the next display column that is multiple of <n>) in the log message before showing it in the output. --expand-tabs is a short-hand for --expand-tabs=8, and --no-expand-tabs is a short-hand for --expand-tabs=0, which disables tab expansion. By default, tabs are expanded in pretty formats that indent the log message by 4 spaces (i.e. medium, which is the default, full, and fuller). 
--show-signature Check the validity of a signed commit object by passing the signature to gpg --verify and show the output. --relative-date Synonym for --date=relative. --date=<format> Only takes effect for dates shown in human-readable format, such as when using --pretty. log.date config variable sets a default value for the log command’s --date option. By default, dates are shown in the original time zone (either committer’s or author’s). If -local is appended to the format (e.g., iso-local), the user’s local time zone is used instead. --date=relative shows dates relative to the current time, e.g. “2 hours ago”. The -local option has no effect for --date=relative. --date=local is an alias for --date=default-local. --date=iso (or --date=iso8601) shows timestamps in a ISO 8601-like format. The differences to the strict ISO 8601 format are: • a space instead of the T date/time delimiter • a space between time and time zone • no colon between hours and minutes of the time zone --date=iso-strict (or --date=iso8601-strict) shows timestamps in strict ISO 8601 format. --date=rfc (or --date=rfc2822) shows timestamps in RFC 2822 format, often found in email messages. --date=short shows only the date, but not the time, in YYYY-MM-DD format. --date=raw shows the date as seconds since the epoch (1970-01-01 00:00:00 UTC), followed by a space, and then the timezone as an offset from UTC (a + or - with four digits; the first two are hours, and the second two are minutes). I.e., as if the timestamp were formatted with strftime("%s %z")). Note that the -local option does not affect the seconds-since-epoch value (which is always measured in UTC), but does switch the accompanying timezone value. --date=human shows the timezone if the timezone does not match the current time-zone, and doesn’t print the whole date if that matches (ie skip printing year for dates that are "this year", but also skip the whole date itself if it’s in the last few days and we can just say what weekday it was). For older dates the hour and minute is also omitted. --date=unix shows the date as a Unix epoch timestamp (seconds since 1970). As with --raw, this is always in UTC and therefore -local has no effect. --date=format:... feeds the format ... to your system strftime, except for %s, %z, and %Z, which are handled internally. Use --date=format:%c to show the date in your system locale’s preferred format. See the strftime manual for a complete list of format placeholders. When using -local, the correct syntax is --date=format-local:.... --date=default is the default format, and is based on ctime(3) output. It shows a single line with three-letter day of the week, three-letter month, day-of-month, hour-minute-seconds in "HH:MM:SS" format, followed by 4-digit year, plus timezone information, unless the local time zone is used, e.g. Thu Jan 1 00:00:00 1970 +0000. --header Print the contents of the commit in raw-format; each record is separated with a NUL character. --no-commit-header Suppress the header line containing "commit" and the object ID printed before the specified format. This has no effect on the built-in formats; only custom formats are affected. --commit-header Overrides a previous --no-commit-header. --parents Print also the parents of the commit (in the form "commit parent..."). Also enables parent rewriting, see History Simplification above. --children Print also the children of the commit (in the form "commit child..."). Also enables parent rewriting, see History Simplification above. 
--timestamp Print the raw commit timestamp. --left-right Mark which side of a symmetric difference a commit is reachable from. Commits from the left side are prefixed with < and those from the right with >. If combined with --boundary, those commits are prefixed with -. For example, if you have this topology: y---b---b branch B / \ / / . / / \ o---x---a---a branch A you would get an output like this: $ git rev-list --left-right --boundary --pretty=oneline A...B >bbbbbbb... 3rd on b >bbbbbbb... 2nd on b <aaaaaaa... 3rd on a <aaaaaaa... 2nd on a -yyyyyyy... 1st on b -xxxxxxx... 1st on a --graph Draw a text-based graphical representation of the commit history on the left hand side of the output. This may cause extra lines to be printed in between commits, in order for the graph history to be drawn properly. Cannot be combined with --no-walk. This enables parent rewriting, see History Simplification above. This implies the --topo-order option by default, but the --date-order option may also be specified. --show-linear-break[=<barrier>] When --graph is not used, all history branches are flattened which can make it hard to see that the two consecutive commits do not belong to a linear branch. This option puts a barrier in between them in that case. If <barrier> is specified, it is the string that will be shown instead of the default one. --count Print a number stating how many commits would have been listed, and suppress all other output. When used together with --left-right, instead print the counts for left and right commits, separated by a tab. When used together with --cherry-mark, omit patch equivalent commits from these counts and print the count for equivalent commits separated by a tab. | # git rev-list
> List revisions (commits) in reverse chronological order. More information:
> https://git-scm.com/docs/git-rev-list.
* List all commits on the current branch:
`git rev-list {{HEAD}}`
* Print the latest commit that changed (add/edit/remove) a specific file on the current branch:
`git rev-list -n 1 HEAD -- {{path/to/file}}`
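* Print a commit roughly halfway between a bad and a good commit, to binary-search a regression (a sketch of the `--bisect` option described above; commit placeholders are illustrative):
`git rev-list --bisect {{bad_commit}} ^{{good_commit}}`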
* List commits more recent than a specific date, on a specific branch:
`git rev-list --since={{'2019-12-01 00:00:00'}} {{branch_name}}`
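* Count the commits unique to each side of a symmetric difference, separated by a tab (combining the `--left-right` and `--count` options described above; branch placeholders are illustrative):
`git rev-list --left-right --count {{branch_1}}...{{branch_2}}`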
* List all merge commits reachable from a specific commit:
`git rev-list --merges {{commit}}`
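* List the IDs of all objects (commits, trees and blobs) reachable from a commit (a sketch of the `--objects` option described above):
`git rev-list --objects {{HEAD}}`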
* Print the number of commits since a specific tag:
`git rev-list {{tag_name}}..HEAD --count` |
lpr | lpr submits files for printing. Files named on the command line are sent to the named printer or the default destination if no destination is specified. If no files are listed on the command- line, lpr reads the print file from the standard input. THE DEFAULT DESTINATION CUPS provides many ways to set the default destination. The LPDEST and PRINTER environment variables are consulted first. If neither are set, the current default set using the lpoptions(1) command is used, followed by the default set using the lpadmin(8) command. The following options are recognized by lpr: -E Forces encryption when connecting to the server. -H server[:port] Specifies an alternate server. -C "name" -J "name" -T "name" Sets the job name/title. -P destination[/instance] Prints files to the named printer. -U username Specifies an alternate username. -# copies Sets the number of copies to print. -h Disables banner printing. This option is equivalent to -o job-sheets=none. -l Specifies that the print file is already formatted for the destination and should be sent without filtering. This option is equivalent to -o raw. -m Send an email on job completion. -o option[=value] Sets a job option. See "COMMON JOB OPTIONS" below. -p Specifies that the print file should be formatted with a shaded header with the date, time, job name, and page number. This option is equivalent to -o prettyprint and is only useful when printing text files. -q Hold job for printing. -r Specifies that the named print files should be deleted after submitting them. COMMON JOB OPTIONS Aside from the printer-specific options reported by the lpoptions(1) command, the following generic options are available: -o job-sheets=name Prints a cover page (banner) with the document. The "name" can be "classified", "confidential", "secret", "standard", "topsecret", or "unclassified". -o media=size Sets the page size to size. Most printers support at least the size names "a4", "letter", and "legal". -o number-up={2|4|6|9|16} Prints 2, 4, 6, 9, or 16 document (input) pages on each output page. -o orientation-requested=4 Prints the job in landscape (rotated 90 degrees counter- clockwise). -o orientation-requested=5 Prints the job in landscape (rotated 90 degrees clockwise). -o orientation-requested=6 Prints the job in reverse portrait (rotated 180 degrees). -o print-quality=3 -o print-quality=4 -o print-quality=5 Specifies the output quality - draft (3), normal (4), or best (5). -o sides=one-sided Prints on one side of the paper. -o sides=two-sided-long-edge Prints on both sides of the paper for portrait output. -o sides=two-sided-short-edge Prints on both sides of the paper for landscape output. | # lpr
> CUPS tool for printing files. See also: `lpstat` and `lpadmin`. More
> information: https://www.cups.org/doc/man-lpr.html.
* Print a file to the default printer:
`lpr {{path/to/file}}`
* Print 2 copies:
`lpr -# {{2}} {{path/to/file}}`
* Print to a named printer:
`lpr -P {{printer}} {{path/to/file}}`
* Print either a single page (e.g. 2) or a range of pages (e.g. 2-16):
`lpr -o page-ranges={{2|2-16}} {{path/to/file}}`
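* Print in landscape, rotated counter-clockwise (4) or clockwise (5), or in reverse portrait (6), per the `-o orientation-requested` values described above:
`lpr -o orientation-requested={{4|5|6}} {{path/to/file}}`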
* Print on both sides of the paper, flipping along the long edge (portrait) or the short edge (landscape):
`lpr -o sides={{two-sided-long-edge|two-sided-short-edge}} {{path/to/file}}`
* Set page size (more options may be available depending on setup):
`lpr -o media={{a4|letter|legal}} {{path/to/file}}`
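* Set the output quality to draft (3), normal (4) or best (5), per the `-o print-quality` values described above:
`lpr -o print-quality={{3|4|5}} {{path/to/file}}`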
* Print multiple pages per sheet:
`lpr -o number-up={{2|4|6|9|16}} {{path/to/file}}` |
lp | lp submits files for printing or alters a pending job. Use a filename of "-" to force printing from the standard input. THE DEFAULT DESTINATION CUPS provides many ways to set the default destination. The LPDEST and PRINTER environment variables are consulted first. If neither are set, the current default set using the lpoptions(1) command is used, followed by the default set using the lpadmin(8) command. The following options are recognized by lp: -- Marks the end of options; use this to print a file whose name begins with a dash (-). -E Forces encryption when connecting to the server. -U username Specifies the username to use when connecting to the server. -c This option is provided for backwards-compatibility only. On systems that support it, this option forces the print file to be copied to the spool directory before printing. In CUPS, print files are always sent to the scheduler via IPP which has the same effect. -d destination Prints files to the named printer. -h hostname[:port] Chooses an alternate server. -i job-id Specifies an existing job to modify. -m Sends an email when the job is completed. -n copies Sets the number of copies to print. -o "name=value [ ... name=value ]" Sets one or more job options. See "COMMON JOB OPTIONS" below. -q priority Sets the job priority from 1 (lowest) to 100 (highest). The default priority is 50. -s Do not report the resulting job IDs (silent mode.) -t "name" Sets the job name. -H hh:mm -H hold -H immediate -H restart -H resume Specifies when the job should be printed. A value of immediate will print the file immediately, a value of hold will hold the job indefinitely, and a UTC time value (HH:MM) will hold the job until the specified UTC (not local) time. Use a value of resume with the -i option to resume a held job. Use a value of restart with the -i option to restart a completed job. -P page-list Specifies which pages to print in the document. The list can contain a list of numbers and ranges (#-#) separated by commas, e.g., "1,3-5,16". The page numbers refer to the output pages and not the document's original pages - options like "number-up" can affect the numbering of the pages. COMMON JOB OPTIONS Aside from the printer-specific options reported by the lpoptions(1) command, the following generic options are available: -o job-sheets=name Prints a cover page (banner) with the document. The "name" can be "classified", "confidential", "secret", "standard", "topsecret", or "unclassified". -o media=size Sets the page size to size. Most printers support at least the size names "a4", "letter", and "legal". -o number-up={2|4|6|9|16} Prints 2, 4, 6, 9, or 16 document (input) pages on each output page. -o orientation-requested=4 Prints the job in landscape (rotated 90 degrees counter- clockwise). -o orientation-requested=5 Prints the job in landscape (rotated 90 degrees clockwise). -o orientation-requested=6 Prints the job in reverse portrait (rotated 180 degrees). -o print-quality=3 -o print-quality=4 -o print-quality=5 Specifies the output quality - draft (3), normal (4), or best (5). -o sides=one-sided Prints on one side of the paper. -o sides=two-sided-long-edge Prints on both sides of the paper for portrait output. -o sides=two-sided-short-edge Prints on both sides of the paper for landscape output. | # lp
> Print files. More information: https://manned.org/lp.
* Print the output of a command to the default printer (see `lpstat` command):
`echo "test" | lp`
* Print a file to the default printer:
`lp {{path/to/filename}}`
* Print a file to a named printer (see `lpstat` command):
`lp -d {{printer_name}} {{path/to/filename}}`
* Print N copies of a file to the default printer:
`lp -n {{N}} {{path/to/filename}}`
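* Submit a job but hold it until a given UTC time, per the `-H` option described above (use `-H hold` to hold it indefinitely):
`lp -H {{hh:mm}} {{path/to/filename}}`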
* Print only specific pages to the default printer (e.g. pages 1, 3-5, and 16):
`lp -P 1,3-5,16 {{path/to/filename}}`
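* Set the job priority from 1 (lowest) to 100 (highest), per the `-q` option described above (the default is 50):
`lp -q {{priority}} {{path/to/filename}}`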
* Resume printing a job:
`lp -i {{job_id}} -H resume` |
uptime | Print the current time, the length of time the system has been up, the number of users on the system, and the average number of jobs in the run queue over the last 1, 5 and 15 minutes. Processes in an uninterruptible sleep state also contribute to the load average. If FILE is not specified, use /var/run/utmp. /var/log/wtmp as FILE is common. --help display this help and exit --version output version information and exit | # uptime
> Show how long the system has been running, along with other information. More
> information: https://ss64.com/osx/uptime.html.
* Print current time, uptime, number of logged-in users and other information:
`uptime` |
git-count-objects | This counts the number of unpacked object files and disk space consumed by them, to help you decide when it is a good time to repack. -v, --verbose Report in more detail: count: the number of loose objects size: disk space consumed by loose objects, in KiB (unless -H is specified) in-pack: the number of in-pack objects size-pack: disk space consumed by the packs, in KiB (unless -H is specified) prune-packable: the number of loose objects that are also present in the packs. These objects could be pruned using git prune-packed. garbage: the number of files in object database that are neither valid loose objects nor valid packs size-garbage: disk space consumed by garbage files, in KiB (unless -H is specified) alternate: absolute path of alternate object databases; may appear multiple times, one line per path. Note that if the path contains non-printable characters, it may be surrounded by double-quotes and contain C-style backslashed escape sequences. -H, --human-readable Print sizes in human readable format | # git count-objects
> Count the number of unpacked objects and their disk consumption. More
> information: https://git-scm.com/docs/git-count-objects.
* Count the loose (unpacked) objects and display their disk usage:
`git count-objects`
* Count the loose objects and display their disk usage in human-readable units:
`git count-objects --human-readable`
* Display more verbose information:
`git count-objects --verbose`
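* Remove loose objects that are already present in the packs, as suggested by the prune-packable count described above (note this runs a separate Git command):
`git prune-packed`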
* Display more verbose information, with sizes in human-readable units:
`git count-objects --human-readable --verbose` |
git-shortlog | Summarizes git log output in a format suitable for inclusion in release announcements. Each commit will be grouped by author and title. Additionally, "[PATCH]" will be stripped from the commit description. If no revisions are passed on the command line and either standard input is not a terminal or there is no current branch, git shortlog will output a summary of the log read from standard input, without reference to the current repository. -n, --numbered Sort output according to the number of commits per author instead of author alphabetic order. -s, --summary Suppress commit description and provide a commit count summary only. -e, --email Show the email address of each author. --format[=<format>] Instead of the commit subject, use some other information to describe each commit. <format> can be any string accepted by the --format option of git log, such as * [%h] %s. (See the "PRETTY FORMATS" section of git-log(1).) Each pretty-printed commit will be rewrapped before it is shown. --date=<format> Show dates formatted according to the given date string. (See the --date option in the "Commit Formatting" section of git-log(1)). Useful with --group=format:<format>. --group=<type> Group commits based on <type>. If no --group option is specified, the default is author. <type> is one of: • author, commits are grouped by author • committer, commits are grouped by committer (the same as -c) • trailer:<field>, the <field> is interpreted as a case-insensitive commit message trailer (see git-interpret-trailers(1)). For example, if your project uses Reviewed-by trailers, you might want to see who has been reviewing with git shortlog -ns --group=trailer:reviewed-by. • format:<format>, any string accepted by the --format option of git log. (See the "PRETTY FORMATS" section of git-log(1).) Note that commits that do not include the trailer will not be counted. Likewise, commits with multiple trailers (e.g., multiple signoffs) may be counted more than once (but only once per unique trailer value in that commit). Shortlog will attempt to parse each trailer value as a name <email> identity. If successful, the mailmap is applied and the email is omitted unless the --email option is specified. If the value cannot be parsed as an identity, it will be taken literally and completely. If --group is specified multiple times, commits are counted under each value (but again, only once per unique value in that commit). For example, git shortlog --group=author --group=trailer:co-authored-by counts both authors and co-authors. -c, --committer This is an alias for --group=committer. -w[<width>[,<indent1>[,<indent2>]]] Linewrap the output by wrapping each line at width. The first line of each entry is indented by indent1 spaces, and the second and subsequent lines are indented by indent2 spaces. width, indent1, and indent2 default to 76, 6 and 9 respectively. If width is 0 (zero) then indent the lines of the output without wrapping them. <revision-range> Show only commits in the specified revision range. When no <revision-range> is specified, it defaults to HEAD (i.e. the whole history leading to the current commit). origin..HEAD specifies all the commits reachable from the current commit (i.e. HEAD), but not from origin. For a complete list of ways to spell <revision-range>, see the "Specifying Ranges" section of gitrevisions(7). [--] <path>... Consider only commits that are enough to explain how the files that match the specified paths came to be. 
Paths may need to be prefixed with -- to separate them from options or the revision range, when confusion arises. Commit Limiting Besides specifying a range of commits that should be listed using the special notations explained in the description, additional commit limiting may be applied. Using more options generally further limits the output (e.g. --since=<date1> limits to commits newer than <date1>, and using it with --grep=<pattern> further limits to commits whose log message has a line that matches <pattern>), unless otherwise noted. Note that these are applied before commit ordering and formatting options, such as --reverse. -<number>, -n <number>, --max-count=<number> Limit the number of commits to output. --skip=<number> Skip number commits before starting to show the commit output. --since=<date>, --after=<date> Show commits more recent than a specific date. --since-as-filter=<date> Show all commits more recent than a specific date. This visits all commits in the range, rather than stopping at the first commit which is older than a specific date. --until=<date>, --before=<date> Show commits older than a specific date. --author=<pattern>, --committer=<pattern> Limit the commits output to ones with author/committer header lines that match the specified pattern (regular expression). With more than one --author=<pattern>, commits whose author matches any of the given patterns are chosen (similarly for multiple --committer=<pattern>). --grep-reflog=<pattern> Limit the commits output to ones with reflog entries that match the specified pattern (regular expression). With more than one --grep-reflog, commits whose reflog message matches any of the given patterns are chosen. It is an error to use this option unless --walk-reflogs is in use. --grep=<pattern> Limit the commits output to ones with log message that matches the specified pattern (regular expression). With more than one --grep=<pattern>, commits whose message matches any of the given patterns are chosen (but see --all-match). When --notes is in effect, the message from the notes is matched as if it were part of the log message. --all-match Limit the commits output to ones that match all given --grep, instead of ones that match at least one. --invert-grep Limit the commits output to ones with log message that do not match the pattern specified with --grep=<pattern>. -i, --regexp-ignore-case Match the regular expression limiting patterns without regard to letter case. --basic-regexp Consider the limiting patterns to be basic regular expressions; this is the default. -E, --extended-regexp Consider the limiting patterns to be extended regular expressions instead of the default basic regular expressions. -F, --fixed-strings Consider the limiting patterns to be fixed strings (don’t interpret pattern as a regular expression). -P, --perl-regexp Consider the limiting patterns to be Perl-compatible regular expressions. Support for these types of regular expressions is an optional compile-time dependency. If Git wasn’t compiled with support for them providing this option will cause it to die. --remove-empty Stop when a given path disappears from the tree. --merges Print only merge commits. This is exactly the same as --min-parents=2. --no-merges Do not print commits with more than one parent. This is exactly the same as --max-parents=1. --min-parents=<number>, --max-parents=<number>, --no-min-parents, --no-max-parents Show only commits which have at least (or at most) that many parent commits. 
In particular, --max-parents=1 is the same as --no-merges, --min-parents=2 is the same as --merges. --max-parents=0 gives all root commits and --min-parents=3 all octopus merges. --no-min-parents and --no-max-parents reset these limits (to no limit) again. Equivalent forms are --min-parents=0 (any commit has 0 or more parents) and --max-parents=-1 (negative numbers denote no upper limit). --first-parent When finding commits to include, follow only the first parent commit upon seeing a merge commit. This option can give a better overview when viewing the evolution of a particular topic branch, because merges into a topic branch tend to be only about adjusting to updated upstream from time to time, and this option allows you to ignore the individual commits brought in to your history by such a merge. --exclude-first-parent-only When finding commits to exclude (with a ^), follow only the first parent commit upon seeing a merge commit. This can be used to find the set of changes in a topic branch from the point where it diverged from the remote branch, given that arbitrary merges can be valid topic branch changes. --not Reverses the meaning of the ^ prefix (or lack thereof) for all following revision specifiers, up to the next --not. --all Pretend as if all the refs in refs/, along with HEAD, are listed on the command line as <commit>. --branches[=<pattern>] Pretend as if all the refs in refs/heads are listed on the command line as <commit>. If <pattern> is given, limit branches to ones matching given shell glob. If pattern lacks ?, *, or [, /* at the end is implied. --tags[=<pattern>] Pretend as if all the refs in refs/tags are listed on the command line as <commit>. If <pattern> is given, limit tags to ones matching given shell glob. If pattern lacks ?, *, or [, /* at the end is implied. --remotes[=<pattern>] Pretend as if all the refs in refs/remotes are listed on the command line as <commit>. If <pattern> is given, limit remote-tracking branches to ones matching given shell glob. If pattern lacks ?, *, or [, /* at the end is implied. --glob=<glob-pattern> Pretend as if all the refs matching shell glob <glob-pattern> are listed on the command line as <commit>. Leading refs/, is automatically prepended if missing. If pattern lacks ?, *, or [, /* at the end is implied. --exclude=<glob-pattern> Do not include refs matching <glob-pattern> that the next --all, --branches, --tags, --remotes, or --glob would otherwise consider. Repetitions of this option accumulate exclusion patterns up to the next --all, --branches, --tags, --remotes, or --glob option (other options or arguments do not clear accumulated patterns). The patterns given should not begin with refs/heads, refs/tags, or refs/remotes when applied to --branches, --tags, or --remotes, respectively, and they must begin with refs/ when applied to --glob or --all. If a trailing /* is intended, it must be given explicitly. --exclude-hidden=[fetch|receive|uploadpack] Do not include refs that would be hidden by git-fetch, git-receive-pack or git-upload-pack by consulting the appropriate fetch.hideRefs, receive.hideRefs or uploadpack.hideRefs configuration along with transfer.hideRefs (see git-config(1)). This option affects the next pseudo-ref option --all or --glob and is cleared after processing them. --reflog Pretend as if all objects mentioned by reflogs are listed on the command line as <commit>. --alternate-refs Pretend as if all objects mentioned as ref tips of alternate repositories were listed on the command line. 
An alternate repository is any repository whose object directory is specified in objects/info/alternates. The set of included objects may be modified by core.alternateRefsCommand, etc. See git-config(1). --single-worktree By default, all working trees will be examined by the following options when there are more than one (see git-worktree(1)): --all, --reflog and --indexed-objects. This option forces them to examine the current working tree only. --ignore-missing Upon seeing an invalid object name in the input, pretend as if the bad input was not given. --bisect Pretend as if the bad bisection ref refs/bisect/bad was listed and as if it was followed by --not and the good bisection refs refs/bisect/good-* on the command line. --stdin In addition to the <commit> listed on the command line, read them from the standard input. If a -- separator is seen, stop reading commits and start reading paths to limit the result. --cherry-mark Like --cherry-pick (see below) but mark equivalent commits with = rather than omitting them, and inequivalent ones with +. --cherry-pick Omit any commit that introduces the same change as another commit on the “other side” when the set of commits are limited with symmetric difference. For example, if you have two branches, A and B, a usual way to list all commits on only one side of them is with --left-right (see the example below in the description of the --left-right option). However, it shows the commits that were cherry-picked from the other branch (for example, “3rd on b” may be cherry-picked from branch A). With this option, such pairs of commits are excluded from the output. --left-only, --right-only List only commits on the respective side of a symmetric difference, i.e. only those which would be marked < resp. > by --left-right. For example, --cherry-pick --right-only A...B omits those commits from B which are in A or are patch-equivalent to a commit in A. In other words, this lists the + commits from git cherry A B. More precisely, --cherry-pick --right-only --no-merges gives the exact list. --cherry A synonym for --right-only --cherry-mark --no-merges; useful to limit the output to the commits on our side and mark those that have been applied to the other side of a forked history with git log --cherry upstream...mybranch, similar to git cherry upstream mybranch. -g, --walk-reflogs Instead of walking the commit ancestry chain, walk reflog entries from the most recent one to older ones. When this option is used you cannot specify commits to exclude (that is, ^commit, commit1..commit2, and commit1...commit2 notations cannot be used). With --pretty format other than oneline and reference (for obvious reasons), this causes the output to have two extra lines of information taken from the reflog. The reflog designator in the output may be shown as ref@{Nth} (where Nth is the reverse-chronological index in the reflog) or as ref@{timestamp} (with the timestamp for that entry), depending on a few rules: 1. If the starting point is specified as ref@{Nth}, show the index format. 2. If the starting point was specified as ref@{now}, show the timestamp format. 3. If neither was used, but --date was given on the command line, show the timestamp in the format requested by --date. 4. Otherwise, show the index format. Under --pretty=oneline, the commit message is prefixed with this information on the same line. This option cannot be combined with --reverse. See also git-reflog(1). Under --pretty=reference, this information will not be shown at all. 
--merge After a failed merge, show refs that touch files having a conflict and don’t exist on all heads to merge. --boundary Output excluded boundary commits. Boundary commits are prefixed with -. History Simplification Sometimes you are only interested in parts of the history, for example the commits modifying a particular <path>. But there are two parts of History Simplification, one part is selecting the commits and the other is how to do it, as there are various strategies to simplify the history. The following options select the commits to be shown: <paths> Commits modifying the given <paths> are selected. --simplify-by-decoration Commits that are referred by some branch or tag are selected. Note that extra commits can be shown to give a meaningful history. The following options affect the way the simplification is performed: Default mode Simplifies the history to the simplest history explaining the final state of the tree. Simplest because it prunes some side branches if the end result is the same (i.e. merging branches with the same content) --show-pulls Include all commits from the default mode, but also any merge commits that are not TREESAME to the first parent but are TREESAME to a later parent. This mode is helpful for showing the merge commits that "first introduced" a change to a branch. --full-history Same as the default mode, but does not prune some history. --dense Only the selected commits are shown, plus some to have a meaningful history. --sparse All commits in the simplified history are shown. --simplify-merges Additional option to --full-history to remove some needless merges from the resulting history, as there are no selected commits contributing to this merge. --ancestry-path[=<commit>] When given a range of commits to display (e.g. commit1..commit2 or commit2 ^commit1), only display commits in that range that are ancestors of <commit>, descendants of <commit>, or <commit> itself. If no commit is specified, use commit1 (the excluded part of the range) as <commit>. Can be passed multiple times; if so, a commit is included if it is any of the commits given or if it is an ancestor or descendant of one of them. A more detailed explanation follows. Suppose you specified foo as the <paths>. We shall call commits that modify foo !TREESAME, and the rest TREESAME. (In a diff filtered for foo, they look different and equal, respectively.) In the following, we will always refer to the same example history to illustrate the differences between simplification settings. We assume that you are filtering for a file foo in this commit graph: .-A---M---N---O---P---Q / / / / / / I B C D E Y \ / / / / / `-------------' X The horizontal line of history A---Q is taken to be the first parent of each merge. The commits are: • I is the initial commit, in which foo exists with contents “asdf”, and a file quux exists with contents “quux”. Initial commits are compared to an empty tree, so I is !TREESAME. • In A, foo contains just “foo”. • B contains the same change as A. Its merge M is trivial and hence TREESAME to all parents. • C does not change foo, but its merge N changes it to “foobar”, so it is not TREESAME to any parent. • D sets foo to “baz”. Its merge O combines the strings from N and D to “foobarbaz”; i.e., it is not TREESAME to any parent. • E changes quux to “xyzzy”, and its merge P combines the strings to “quux xyzzy”. P is TREESAME to O, but not to E. • X is an independent root commit that added a new file side, and Y modified it. Y is TREESAME to X. 
Its merge Q added side to P, and Q is TREESAME to P, but not to Y. rev-list walks backwards through history, including or excluding commits based on whether --full-history and/or parent rewriting (via --parents or --children) are used. The following settings are available. Default mode Commits are included if they are not TREESAME to any parent (though this can be changed, see --sparse below). If the commit was a merge, and it was TREESAME to one parent, follow only that parent. (Even if there are several TREESAME parents, follow only one of them.) Otherwise, follow all parents. This results in: .-A---N---O / / / I---------D Note how the rule to only follow the TREESAME parent, if one is available, removed B from consideration entirely. C was considered via N, but is TREESAME. Root commits are compared to an empty tree, so I is !TREESAME. Parent/child relations are only visible with --parents, but that does not affect the commits selected in default mode, so we have shown the parent lines. --full-history without parent rewriting This mode differs from the default in one point: always follow all parents of a merge, even if it is TREESAME to one of them. Even if more than one side of the merge has commits that are included, this does not imply that the merge itself is! In the example, we get I A B N D O P Q M was excluded because it is TREESAME to both parents. E, C and B were all walked, but only B was !TREESAME, so the others do not appear. Note that without parent rewriting, it is not really possible to talk about the parent/child relationships between the commits, so we show them disconnected. --full-history with parent rewriting Ordinary commits are only included if they are !TREESAME (though this can be changed, see --sparse below). Merges are always included. However, their parent list is rewritten: Along each parent, prune away commits that are not included themselves. This results in .-A---M---N---O---P---Q / / / / / I B / D / \ / / / / `-------------' Compare to --full-history without rewriting above. Note that E was pruned away because it is TREESAME, but the parent list of P was rewritten to contain E's parent I. The same happened for C and N, and X, Y and Q. In addition to the above settings, you can change whether TREESAME affects inclusion: --dense Commits that are walked are included if they are not TREESAME to any parent. --sparse All commits that are walked are included. Note that without --full-history, this still simplifies merges: if one of the parents is TREESAME, we follow only that one, so the other sides of the merge are never walked. --simplify-merges First, build a history graph in the same way that --full-history with parent rewriting does (see above). Then simplify each commit C to its replacement C' in the final history according to the following rules: • Set C' to C. • Replace each parent P of C' with its simplification P'. In the process, drop parents that are ancestors of other parents or that are root commits TREESAME to an empty tree, and remove duplicates, but take care to never drop all parents that we are TREESAME to. • If after this parent rewriting, C' is a root or merge commit (has zero or >1 parents), a boundary commit, or !TREESAME, it remains. Otherwise, it is replaced with its only parent. The effect of this is best shown by way of comparing to --full-history with parent rewriting. 
The example turns into: .-A---M---N---O / / / I B D \ / / `---------' Note the major differences in N, P, and Q over --full-history: • N's parent list had I removed, because it is an ancestor of the other parent M. Still, N remained because it is !TREESAME. • P's parent list similarly had I removed. P was then removed completely, because it had one parent and is TREESAME. • Q's parent list had Y simplified to X. X was then removed, because it was a TREESAME root. Q was then removed completely, because it had one parent and is TREESAME. There is another simplification mode available: --ancestry-path[=<commit>] Limit the displayed commits to those which are an ancestor of <commit>, or which are a descendant of <commit>, or are <commit> itself. As an example use case, consider the following commit history: D---E-------F / \ \ B---C---G---H---I---J / \ A-------K---------------L--M A regular D..M computes the set of commits that are ancestors of M, but excludes the ones that are ancestors of D. This is useful to see what happened to the history leading to M since D, in the sense that “what does M have that did not exist in D”. The result in this example would be all the commits, except A and B (and D itself, of course). When we want to find out what commits in M are contaminated with the bug introduced by D and need fixing, however, we might want to view only the subset of D..M that are actually descendants of D, i.e. excluding C and K. This is exactly what the --ancestry-path option does. Applied to the D..M range, it results in: E-------F \ \ G---H---I---J \ L--M We can also use --ancestry-path=D instead of --ancestry-path which means the same thing when applied to the D..M range but is just more explicit. If we instead are interested in a given topic within this range, and all commits affected by that topic, we may only want to view the subset of D..M which contain that topic in their ancestry path. So, using --ancestry-path=H D..M for example would result in: E \ G---H---I---J \ L--M Whereas --ancestry-path=K D..M would result in K---------------L--M Before discussing another option, --show-pulls, we need to create a new example history. A common problem users face when looking at simplified history is that a commit they know changed a file somehow does not appear in the file’s simplified history. Let’s demonstrate a new example and show how options such as --full-history and --simplify-merges works in that case: .-A---M-----C--N---O---P / / \ \ \/ / / I B \ R-'`-Z' / \ / \/ / \ / /\ / `---X--' `---Y--' For this example, suppose I created file.txt which was modified by A, B, and X in different ways. The single-parent commits C, Z, and Y do not change file.txt. The merge commit M was created by resolving the merge conflict to include both changes from A and B and hence is not TREESAME to either. The merge commit R, however, was created by ignoring the contents of file.txt at M and taking only the contents of file.txt at X. Hence, R is TREESAME to X but not M. Finally, the natural merge resolution to create N is to take the contents of file.txt at R, so N is TREESAME to R but not C. The merge commits O and P are TREESAME to their first parents, but not to their second parents, Z and Y respectively. When using the default mode, N and R both have a TREESAME parent, so those edges are walked and the others are ignored. The resulting history graph is: I---X When using --full-history, Git walks every edge. 
This will discover the commits A and B and the merge M, but also will reveal the merge commits O and P. With parent rewriting, the resulting graph is: .-A---M--------N---O---P / / \ \ \/ / / I B \ R-'`--' / \ / \/ / \ / /\ / `---X--' `------' Here, the merge commits O and P contribute extra noise, as they did not actually contribute a change to file.txt. They only merged a topic that was based on an older version of file.txt. This is a common issue in repositories using a workflow where many contributors work in parallel and merge their topic branches along a single trunk: many unrelated merges appear in the --full-history results. When using the --simplify-merges option, the commits O and P disappear from the results. This is because the rewritten second parents of O and P are reachable from their first parents. Those edges are removed and then the commits look like single-parent commits that are TREESAME to their parent. This also happens to the commit N, resulting in a history view as follows: .-A---M--. / / \ I B R \ / / \ / / `---X--' In this view, we see all of the important single-parent changes from A, B, and X. We also see the carefully-resolved merge M and the not-so-carefully-resolved merge R. This is usually enough information to determine why the commits A and B "disappeared" from history in the default view. However, there are a few issues with this approach. The first issue is performance. Unlike any previous option, the --simplify-merges option requires walking the entire commit history before returning a single result. This can make the option difficult to use for very large repositories. The second issue is one of auditing. When many contributors are working on the same repository, it is important which merge commits introduced a change into an important branch. The problematic merge R above is not likely to be the merge commit that was used to merge into an important branch. Instead, the merge N was used to merge R and X into the important branch. This commit may have information about why the change X came to override the changes from A and B in its commit message. --show-pulls In addition to the commits shown in the default history, show each merge commit that is not TREESAME to its first parent but is TREESAME to a later parent. When a merge commit is included by --show-pulls, the merge is treated as if it "pulled" the change from another branch. When using --show-pulls on this example (and no other options) the resulting graph is: I---X---R---N Here, the merge commits R and N are included because they pulled the commits X and R into the base branch, respectively. These merges are the reason the commits A and B do not appear in the default history. When --show-pulls is paired with --simplify-merges, the graph includes all of the necessary information: .-A---M--. N / / \ / I B R \ / / \ / / `---X--' Notice that since M is reachable from R, the edge from N to M was simplified away. However, N still appears in the history as an important commit because it "pulled" the change R into the main branch. The --simplify-by-decoration option allows you to view only the big picture of the topology of the history, by omitting commits that are not referenced by tags. Commits are marked as !TREESAME (in other words, kept after history simplification rules described above) if (1) they are referenced by tags, or (2) they change the contents of the paths given on the command line. All other commits are marked as TREESAME (subject to be simplified away). | # git shortlog
> Summarizes the `git log` output. More information: https://git-
> scm.com/docs/git-shortlog.
* View a summary of all the commits made, grouped alphabetically by author name:
`git shortlog`
* View a summary of all the commits made, sorted by the number of commits made:
`git shortlog -n`
* View a summary of all the commits made, grouped by the committer identities (name and email):
`git shortlog -c`
* View a summary of the last 5 commits (i.e. specify a revision range):
`git shortlog HEAD~{{5}}..HEAD`
* View all users, emails and the number of commits in the current branch:
`git shortlog -sne`
* View all users, emails and the number of commits in all branches:
`git shortlog -sne --all` |
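The simplification modes described above are shared by the whole rev-list family of commands, so they can be combined with a path filter on `git log` (or `git shortlog`) as well. A minimal sketch; the path and revision names are placeholders, not taken from the examples above:
`git log --full-history -- {{path/to/file}}`
`git log --simplify-merges --show-pulls -- {{path/to/file}}`
`git log --ancestry-path={{commit}} {{commit1}}..{{commit2}}`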
pv | pv shows the progress of data through a pipeline by giving information such as time elapsed, percentage completed (with progress bar), current throughput rate, total data transferred, and ETA. To use it, insert it in a pipeline between two processes, with the appropriate options. Its standard input will be passed through to its standard output and progress will be shown on standard error. pv will copy each supplied FILE in turn to standard output (- means standard input), or if no FILEs are specified just standard input is copied. This is the same behaviour as cat(1). A simple example to watch how quickly a file is transferred using nc(1): pv file | nc -w 1 somewhere.com 3000 A similar example, transferring a file from another process and passing the expected size to pv: cat file | pv -s 12345 | nc -w 1 somewhere.com 3000 A more complicated example using numeric output to feed into the dialog(1) program for a full-screen progress display: (tar cf - . \ | pv -n -s $(du -sb . | awk '{print $1}') \ | gzip -9 > out.tgz) 2>&1 \ | dialog --gauge 'Progress' 7 70 Taking an image of a disk, skipping errors: pv -EE /dev/your/disk/device > disk-image.img Writing an image back to a disk: pv disk-image.img > /dev/your/disk/device Zeroing a disk: pv < /dev/zero > /dev/your/disk/device Note that if the input size cannot be calculated, and the output is a block device, then the size of the block device will be used and pv will automatically stop at that size as if -S had been given. (Linux only): Watching file descriptor 3 opened by another process 1234: pv -d 1234:3 (Linux only): Watching all file descriptors used by process 1234: pv -d 1234 pv takes many options, which are divided into display switches, output modifiers, and general options. | # pv
> Monitor the progress of data through a pipe. More information:
> https://manned.org/pv.
* Print the contents of the file and display a progress bar:
`pv {{path/to/file}}`
* Measure the speed and amount of data flow between pipes (`--size` is optional):
`command1 | pv --size {{expected_amount_of_data_for_eta}} | command2`
* Filter a file, see both progress and amount of output data:
`pv -cN in {{big_text_file}} | grep {{pattern}} | pv -cN out >
{{filtered_file}}`
* Attach to an already running process and see its file reading progress:
`pv -d {{PID}}`
* Read an erroneous file, skip errors as `dd conv=sync,noerror` would:
`pv -EE {{path/to/faulty_media}} > image.img`
* Stop reading after reading specified amount of data, rate limit to 1K/s:
`pv -L 1K --stop-at-size --size {{maximum_file_size_to_be_read}}` |
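The numeric-output example from the description can be written as a single pipeline in the style used here; the archive name and directory are placeholders, and `dialog` must be installed:
`(tar cf - {{path/to/directory}} | pv -n -s $(du -sb {{path/to/directory}} | awk '{print $1}') | gzip -9 > {{path/to/archive.tgz}}) 2>&1 | dialog --gauge 'Progress' 7 70`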
nl | Write each FILE to standard output, with line numbers added. With no FILE, or when FILE is -, read standard input. Mandatory arguments to long options are mandatory for short options too. -b, --body-numbering=STYLE use STYLE for numbering body lines -d, --section-delimiter=CC use CC for logical page delimiters -f, --footer-numbering=STYLE use STYLE for numbering footer lines -h, --header-numbering=STYLE use STYLE for numbering header lines -i, --line-increment=NUMBER line number increment at each line -l, --join-blank-lines=NUMBER group of NUMBER empty lines counted as one -n, --number-format=FORMAT insert line numbers according to FORMAT -p, --no-renumber do not reset line numbers for each section -s, --number-separator=STRING add STRING after (possible) line number -v, --starting-line-number=NUMBER first line number for each section -w, --number-width=NUMBER use NUMBER columns for line numbers --help display this help and exit --version output version information and exit Default options are: -bt -d'\:' -fn -hn -i1 -l1 -n'rn' -s<TAB> -v1 -w6 CC are two delimiter characters used to construct logical page delimiters; a missing second character implies ':'. As a GNU extension one can specify more than two characters, and also specifying the empty string (-d '') disables section matching. STYLE is one of: a number all lines t number only nonempty lines n number no lines pBRE number only lines that contain a match for the basic regular expression, BRE FORMAT is one of: ln left justified, no leading zeros rn right justified, no leading zeros rz right justified, leading zeros | # nl
> A utility for numbering lines, either from a file, or from `stdin`. More
> information: https://www.gnu.org/software/coreutils/nl.
* Number non-blank lines in a file:
`nl {{path/to/file}}`
* Read from `stdin`:
`cat {{path/to/file}} | nl {{options}} -`
* Number only the lines with printable text (i.e. non-empty lines, the default body style):
`nl -b t {{path/to/file}}`
* Number all lines including blank lines:
`nl -b a {{path/to/file}}`
* Number only the body lines that match a basic regular expression (BRE) pattern:
`nl -b p'FooBar[0-9]' {{path/to/file}}` |
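The formatting options documented above can be combined; a small sketch using right-justified, zero-padded numbers with a custom width and separator, and another setting the starting value and increment (all values are placeholders):
`nl -n rz -w {{3}} -s '{{separator}}' {{path/to/file}}`
`nl -v {{first_line_number}} -i {{increment}} {{path/to/file}}`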
git-svn | git svn is a simple conduit for changesets between Subversion and Git. It provides a bidirectional flow of changes between a Subversion and a Git repository. git svn can track a standard Subversion repository, following the common "trunk/branches/tags" layout, with the --stdlayout option. It can also follow branches and tags in any layout with the -T/-t/-b options (see options to init below, and also the clone command). Once tracking a Subversion repository (with any of the above methods), the Git repository can be updated from Subversion by the fetch command and Subversion updated from Git by the dcommit command. --shared[=(false|true|umask|group|all|world|everybody)], --template=<template-directory> Only used with the init command. These are passed directly to git init. -r <arg>, --revision <arg> Used with the fetch command. This allows revision ranges for partial/cauterized history to be supported. $NUMBER, $NUMBER1:$NUMBER2 (numeric ranges), $NUMBER:HEAD, and BASE:$NUMBER are all supported. This can allow you to make partial mirrors when running fetch; but is generally not recommended because history will be skipped and lost. -, --stdin Only used with the set-tree command. Read a list of commits from stdin and commit them in reverse order. Only the leading sha1 is read from each line, so git rev-list --pretty=oneline output can be used. --rmdir Only used with the dcommit, set-tree and commit-diff commands. Remove directories from the SVN tree if there are no files left behind. SVN can version empty directories, and they are not removed by default if there are no files left in them. Git cannot version empty directories. Enabling this flag will make the commit to SVN act like Git. config key: svn.rmdir -e, --edit Only used with the dcommit, set-tree and commit-diff commands. Edit the commit message before committing to SVN. This is off by default for objects that are commits, and forced on when committing tree objects. config key: svn.edit -l<num>, --find-copies-harder Only used with the dcommit, set-tree and commit-diff commands. They are both passed directly to git diff-tree; see git-diff-tree(1) for more information. config key: svn.l config key: svn.findcopiesharder -A<filename>, --authors-file=<filename> Syntax is compatible with the file used by git cvsimport but an empty email address can be supplied with <>: loginname = Joe User <user@example.com> If this option is specified and git svn encounters an SVN committer name that does not exist in the authors-file, git svn will abort operation. The user will then have to add the appropriate entry. Re-running the previous git svn command after the authors-file is modified should continue operation. config key: svn.authorsfile --authors-prog=<filename> If this option is specified, for each SVN committer name that does not exist in the authors file, the given file is executed with the committer name as the first argument. The program is expected to return a single line of the form "Name <email>" or "Name <>", which will be treated as if included in the authors file. Due to historical reasons a relative filename is first searched relative to the current directory for init and clone and relative to the root of the working tree for fetch. If filename is not found, it is searched like any other command in $PATH. config key: svn.authorsProg -q, --quiet Make git svn less verbose. Specify a second time to make it even less verbose. 
-m, --merge, -s<strategy>, --strategy=<strategy>, -p, --rebase-merges These are only used with the dcommit and rebase commands. Passed directly to git rebase when using dcommit if a git reset cannot be used (see dcommit). -n, --dry-run This can be used with the dcommit, rebase, branch and tag commands. For dcommit, print out the series of Git arguments that would show which diffs would be committed to SVN. For rebase, display the local branch associated with the upstream svn repository associated with the current branch and the URL of svn repository that will be fetched from. For branch and tag, display the urls that will be used for copying when creating the branch or tag. --use-log-author When retrieving svn commits into Git (as part of fetch, rebase, or dcommit operations), look for the first From: line or Signed-off-by trailer in the log message and use that as the author string. config key: svn.useLogAuthor --add-author-from When committing to svn from Git (as part of set-tree or dcommit operations), if the existing log message doesn’t already have a From: or Signed-off-by trailer, append a From: line based on the Git commit’s author string. If you use this, then --use-log-author will retrieve a valid author string for all commits. config key: svn.addAuthorFrom | # git svn
> Bidirectional operation between a Subversion repository and Git. More
> information: https://git-scm.com/docs/git-svn.
* Clone an SVN repository:
`git svn clone {{https://example.com/subversion_repo}} {{local_dir}}`
* Clone an SVN repository starting at a given revision number:
`git svn clone -r{{1234}}:HEAD {{https://svn.example.net/subversion/repo}}
{{local_dir}}`
* Update local clone from the remote SVN repository:
`git svn rebase`
* Fetch updates from the remote SVN repository without changing the Git HEAD:
`git svn fetch`
* Commit back to the SVN repository:
`git svn dcommit` |
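A sketch of cloning a repository that uses the standard trunk/branches/tags layout while mapping SVN committers to Git authors; the authors file path is a placeholder and its format is shown in the description above:
`git svn clone --stdlayout -A {{path/to/authors_file}} {{https://svn.example.net/repo}} {{path/to/local_directory}}`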
grep | grep searches for PATTERNS in each FILE. PATTERNS is one or more patterns separated by newline characters, and grep prints each line that matches a pattern. Typically PATTERNS should be quoted when grep is used in a shell command. A FILE of “-” stands for standard input. If no FILE is given, recursive searches examine the working directory, and nonrecursive searches read standard input. Generic Program Information --help Output a usage message and exit. -V, --version Output the version number of grep and exit. Pattern Syntax -E, --extended-regexp Interpret PATTERNS as extended regular expressions (EREs, see below). -F, --fixed-strings Interpret PATTERNS as fixed strings, not regular expressions. -G, --basic-regexp Interpret PATTERNS as basic regular expressions (BREs, see below). This is the default. -P, --perl-regexp Interpret PATTERNS as Perl-compatible regular expressions (PCREs). This option is experimental when combined with the -z (--null-data) option, and grep -P may warn of unimplemented features. Matching Control -e PATTERNS, --regexp=PATTERNS Use PATTERNS as the patterns. If this option is used multiple times or is combined with the -f (--file) option, search for all patterns given. This option can be used to protect a pattern beginning with “-”. -f FILE, --file=FILE Obtain patterns from FILE, one per line. If this option is used multiple times or is combined with the -e (--regexp) option, search for all patterns given. The empty file contains zero patterns, and therefore matches nothing. If FILE is - , read patterns from standard input. -i, --ignore-case Ignore case distinctions in patterns and input data, so that characters that differ only in case match each other. --no-ignore-case Do not ignore case distinctions in patterns and input data. This is the default. This option is useful for passing to shell scripts that already use -i, to cancel its effects because the two options override each other. -v, --invert-match Invert the sense of matching, to select non-matching lines. -w, --word-regexp Select only those lines containing matches that form whole words. The test is that the matching substring must either be at the beginning of the line, or preceded by a non-word constituent character. Similarly, it must be either at the end of the line or followed by a non-word constituent character. Word-constituent characters are letters, digits, and the underscore. This option has no effect if -x is also specified. -x, --line-regexp Select only those matches that exactly match the whole line. For a regular expression pattern, this is like parenthesizing the pattern and then surrounding it with ^ and $. General Output Control -c, --count Suppress normal output; instead print a count of matching lines for each input file. With the -v, --invert-match option (see above), count non-matching lines. --color[=WHEN], --colour[=WHEN] Surround the matched (non-empty) strings, matching lines, context lines, file names, line numbers, byte offsets, and separators (for fields and groups of context lines) with escape sequences to display them in color on the terminal. The colors are defined by the environment variable GREP_COLORS. WHEN is never, always, or auto. -L, --files-without-match Suppress normal output; instead print the name of each input file from which no output would normally have been printed. -l, --files-with-matches Suppress normal output; instead print the name of each input file from which output would normally have been printed. Scanning each input file stops upon first match. 
-m NUM, --max-count=NUM Stop reading a file after NUM matching lines. If NUM is zero, grep stops right away without reading input. A NUM of -1 is treated as infinity and grep does not stop; this is the default. If the input is standard input from a regular file, and NUM matching lines are output, grep ensures that the standard input is positioned to just after the last matching line before exiting, regardless of the presence of trailing context lines. This enables a calling process to resume a search. When grep stops after NUM matching lines, it outputs any trailing context lines. When the -c or --count option is also used, grep does not output a count greater than NUM. When the -v or --invert-match option is also used, grep stops after outputting NUM non-matching lines. -o, --only-matching Print only the matched (non-empty) parts of a matching line, with each such part on a separate output line. -q, --quiet, --silent Quiet; do not write anything to standard output. Exit immediately with zero status if any match is found, even if an error was detected. Also see the -s or --no-messages option. -s, --no-messages Suppress error messages about nonexistent or unreadable files. Output Line Prefix Control -b, --byte-offset Print the 0-based byte offset within the input file before each line of output. If -o (--only-matching) is specified, print the offset of the matching part itself. -H, --with-filename Print the file name for each match. This is the default when there is more than one file to search. This is a GNU extension. -h, --no-filename Suppress the prefixing of file names on output. This is the default when there is only one file (or only standard input) to search. --label=LABEL Display input actually coming from standard input as input coming from file LABEL. This can be useful for commands that transform a file's contents before searching, e.g., gzip -cd foo.gz | grep --label=foo -H 'some pattern'. See also the -H option. -n, --line-number Prefix each line of output with the 1-based line number within its input file. -T, --initial-tab Make sure that the first character of actual line content lies on a tab stop, so that the alignment of tabs looks normal. This is useful with options that prefix their output to the actual content: -H,-n, and -b. In order to improve the probability that lines from a single file will all start at the same column, this also causes the line number and byte offset (if present) to be printed in a minimum size field width. -Z, --null Output a zero byte (the ASCII NUL character) instead of the character that normally follows a file name. For example, grep -lZ outputs a zero byte after each file name instead of the usual newline. This option makes the output unambiguous, even in the presence of file names containing unusual characters like newlines. This option can be used with commands like find -print0, perl -0, sort -z, and xargs -0 to process arbitrary file names, even those that contain newline characters. Context Line Control -A NUM, --after-context=NUM Print NUM lines of trailing context after matching lines. Places a line containing a group separator (--) between contiguous groups of matches. With the -o or --only-matching option, this has no effect and a warning is given. -B NUM, --before-context=NUM Print NUM lines of leading context before matching lines. Places a line containing a group separator (--) between contiguous groups of matches. With the -o or --only-matching option, this has no effect and a warning is given. 
-C NUM, -NUM, --context=NUM Print NUM lines of output context. Places a line containing a group separator (--) between contiguous groups of matches. With the -o or --only-matching option, this has no effect and a warning is given. --group-separator=SEP When -A, -B, or -C are in use, print SEP instead of -- between groups of lines. --no-group-separator When -A, -B, or -C are in use, do not print a separator between groups of lines. File and Directory Selection -a, --text Process a binary file as if it were text; this is equivalent to the --binary-files=text option. --binary-files=TYPE If a file's data or metadata indicate that the file contains binary data, assume that the file is of type TYPE. Non-text bytes indicate binary data; these are either output bytes that are improperly encoded for the current locale, or null input bytes when the -z option is not given. By default, TYPE is binary, and grep suppresses output after null input binary data is discovered, and suppresses output lines that contain improperly encoded data. When some output is suppressed, grep follows any output with a message to standard error saying that a binary file matches. If TYPE is without-match, when grep discovers null input binary data it assumes that the rest of the file does not match; this is equivalent to the -I option. If TYPE is text, grep processes a binary file as if it were text; this is equivalent to the -a option. When type is binary, grep may treat non-text bytes as line terminators even without the -z option. This means choosing binary versus text can affect whether a pattern matches a file. For example, when type is binary the pattern q$ might match q immediately followed by a null byte, even though this is not matched when type is text. Conversely, when type is binary the pattern . (period) might not match a null byte. Warning: The -a option might output binary garbage, which can have nasty side effects if the output is a terminal and if the terminal driver interprets some of it as commands. On the other hand, when reading files whose text encodings are unknown, it can be helpful to use -a or to set LC_ALL='C' in the environment, in order to find more matches even if the matches are unsafe for direct display. -D ACTION, --devices=ACTION If an input file is a device, FIFO or socket, use ACTION to process it. By default, ACTION is read, which means that devices are read just as if they were ordinary files. If ACTION is skip, devices are silently skipped. -d ACTION, --directories=ACTION If an input file is a directory, use ACTION to process it. By default, ACTION is read, i.e., read directories just as if they were ordinary files. If ACTION is skip, silently skip directories. If ACTION is recurse, read all files under each directory, recursively, following symbolic links only if they are on the command line. This is equivalent to the -r option. --exclude=GLOB Skip any command-line file with a name suffix that matches the pattern GLOB, using wildcard matching; a name suffix is either the whole name, or a trailing part that starts with a non-slash character immediately after a slash (/) in the name. When searching recursively, skip any subfile whose base name matches GLOB; the base name is the part after the last slash. A pattern can use *, ?, and [...] as wildcards, and \ to quote a wildcard or backslash character literally. --exclude-from=FILE Skip files whose base name matches any of the file-name globs read from FILE (using wildcard matching as described under --exclude). 
--exclude-dir=GLOB Skip any command-line directory with a name suffix that matches the pattern GLOB. When searching recursively, skip any subdirectory whose base name matches GLOB. Ignore any redundant trailing slashes in GLOB. -I Process a binary file as if it did not contain matching data; this is equivalent to the --binary-files=without-match option. --include=GLOB Search only files whose base name matches GLOB (using wildcard matching as described under --exclude). If contradictory --include and --exclude options are given, the last matching one wins. If no --include or --exclude options match, a file is included unless the first such option is --include. -r, --recursive Read all files under each directory, recursively, following symbolic links only if they are on the command line. Note that if no file operand is given, grep searches the working directory. This is equivalent to the -d recurse option. -R, --dereference-recursive Read all files under each directory, recursively. Follow all symbolic links, unlike -r. Other Options --line-buffered Use line buffering on output. This can cause a performance penalty. -U, --binary Treat the file(s) as binary. By default, under MS-DOS and MS-Windows, grep guesses whether a file is text or binary as described for the --binary-files option. If grep decides the file is a text file, it strips the CR characters from the original file contents (to make regular expressions with ^ and $ work correctly). Specifying -U overrules this guesswork, causing all files to be read and passed to the matching mechanism verbatim; if the file is a text file with CR/LF pairs at the end of each line, this will cause some regular expressions to fail. This option has no effect on platforms other than MS-DOS and MS-Windows. -z, --null-data Treat input and output data as sequences of lines, each terminated by a zero byte (the ASCII NUL character) instead of a newline. Like the -Z or --null option, this option can be used with commands like sort -z to process arbitrary file names. | # grep
> Find patterns in files using regular expressions. More information:
> https://www.gnu.org/software/grep/manual/grep.html.
* Search for a pattern within a file:
`grep "{{search_pattern}}" {{path/to/file}}`
* Search for an exact string (disables regular expressions):
`grep --fixed-strings "{{exact_string}}" {{path/to/file}}`
* Search for a pattern in all files recursively in a directory, showing line numbers of matches, ignoring binary files:
`grep --recursive --line-number --binary-files={{without-match}}
"{{search_pattern}}" {{path/to/directory}}`
* Use extended regular expressions (supports `?`, `+`, `{}`, `()` and `|`), in case-insensitive mode:
`grep --extended-regexp --ignore-case "{{search_pattern}}" {{path/to/file}}`
* Print 3 lines of context around, before, or after each match:
`grep --{{context|before-context|after-context}}={{3}} "{{search_pattern}}"
{{path/to/file}}`
* Print file name and line number for each match with color output:
`grep --with-filename --line-number --color=always "{{search_pattern}}"
{{path/to/file}}`
* Search for lines matching a pattern, printing only the matched text:
`grep --only-matching "{{search_pattern}}" {{path/to/file}}`
* Search `stdin` for lines that do not match a pattern:
`cat {{path/to/file}} | grep --invert-match "{{search_pattern}}"` |
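Several of the options documented above compose well; a sketch that recursively lists matching files restricted to a glob and hands the NUL-separated names to another command (the glob, pattern and command are placeholders):
`grep -rlZ --include='{{*.c}}' "{{search_pattern}}" {{path/to/directory}} | xargs -0 {{command}}`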
systemd-run | systemd-run may be used to create and start a transient .service or .scope unit and run the specified COMMAND in it. It may also be used to create and start a transient .path, .socket, or .timer unit, that activates a .service unit when elapsing. If a command is run as transient service unit, it will be started and managed by the service manager like any other service, and thus shows up in the output of systemctl list-units like any other unit. It will run in a clean and detached execution environment, with the service manager as its parent process. In this mode, systemd-run will start the service asynchronously in the background and return after the command has begun execution (unless --no-block or --wait are specified, see below). If a command is run as transient scope unit, it will be executed by systemd-run itself as parent process and will thus inherit the execution environment of the caller. However, the processes of the command are managed by the service manager similarly to normal services, and will show up in the output of systemctl list-units. Execution in this case is synchronous, and will return only when the command finishes. This mode is enabled via the --scope switch (see below). If a command is run with path, socket, or timer options such as --on-calendar= (see below), a transient path, socket, or timer unit is created alongside the service unit for the specified command. Only the transient path, socket, or timer unit is started immediately, the transient service unit will be triggered by the path, socket, or timer unit. If the --unit= option is specified, the COMMAND may be omitted. In this case, systemd-run creates only a .path, .socket, or .timer unit that triggers the specified unit. By default, services created with systemd-run default to the simple type, see the description of Type= in systemd.service(5) for details. Note that when this type is used, the service manager (and thus the systemd-run command) considers service start-up successful as soon as the fork() for the main service process succeeded, i.e. before the execve() is invoked, and thus even if the specified command cannot be started. Consider using the exec service type (i.e. --property=Type=exec) to ensure that systemd-run returns successfully only if the specified command line has been successfully started. After systemd-run passes the command to the service manager, the manager performs variable expansion. This means that dollar characters ("$") which should not be expanded need to be escaped as "$$". Expansion can also be disabled using --expand-environment=no. The following options are understood: --no-ask-password Do not query the user for authentication for privileged operations. --scope Create a transient .scope unit instead of the default transient .service unit (see above). --unit=, -u Use this unit name instead of an automatically generated one. --property=, -p Sets a property on the scope or service unit that is created. This option takes an assignment in the same format as systemctl(1)'s set-property command. --description= Provide a description for the service, scope, path, socket, or timer unit. If not specified, the command itself will be used as a description. See Description= in systemd.unit(5). --slice= Make the new .service or .scope unit part of the specified slice, instead of system.slice (when running in --system mode) or the root slice (when running in --user mode). --slice-inherit Make the new .service or .scope unit part of the inherited slice. 
This option can be combined with --slice=. An inherited slice is located within systemd-run slice. Example: if systemd-run slice is foo.slice, and the --slice= argument is bar, the unit will be placed under the foo-bar.slice. --expand-environment=BOOL Expand environment variables in command arguments. If enabled (the default), environment variables specified as "${VARIABLE}" will be expanded in the same way as in commands specified via ExecStart= in units. With --scope, this expansion is performed by systemd-run itself, and in other cases by the service manager that spawns the command. Note that this is similar to, but not the same as variable expansion in bash(1) and other shells. See systemd.service(5) for a description of variable expansion. Disabling variable expansion is useful if the specified command includes or may include a "$" sign. -r, --remain-after-exit After the service process has terminated, keep the service around until it is explicitly stopped. This is useful to collect runtime information about the service after it finished running. Also see RemainAfterExit= in systemd.service(5). --send-sighup When terminating the scope or service unit, send a SIGHUP immediately after SIGTERM. This is useful to indicate to shells and shell-like processes that the connection has been severed. Also see SendSIGHUP= in systemd.kill(5). --service-type= Sets the service type. Also see Type= in systemd.service(5). This option has no effect in conjunction with --scope. Defaults to simple. --uid=, --gid= Runs the service process under the specified UNIX user and group. Also see User= and Group= in systemd.exec(5). --nice= Runs the service process with the specified nice level. Also see Nice= in systemd.exec(5). --working-directory= Runs the service process with the specified working directory. Also see WorkingDirectory= in systemd.exec(5). --same-dir, -d Similar to --working-directory=, but uses the current working directory of the caller for the service to execute. -E NAME[=VALUE], --setenv=NAME[=VALUE] Runs the service process with the specified environment variable set. This parameter may be used more than once to set multiple variables. When "=" and VALUE are omitted, the value of the variable with the same name in the program environment will be used. Also see Environment= in systemd.exec(5). --pty, -t When invoking the command, the transient service connects its standard input, output and error to the terminal systemd-run is invoked on, via a pseudo TTY device. This allows running programs that expect interactive user input/output as services, such as interactive command shells. Note that machinectl(1)'s shell command is usually a better alternative for requesting a new, interactive login session on the local host or a local container. See below for details on how this switch combines with --pipe. --pipe, -P If specified, standard input, output, and error of the transient service are inherited from the systemd-run command itself. This allows systemd-run to be used within shell pipelines. Note that this mode is not suitable for interactive command shells and similar, as the service process will not become a TTY controller when invoked on a terminal. Use --pty instead in that case. When both --pipe and --pty are used in combination the more appropriate option is automatically determined and used. Specifically, when invoked with standard input, output and error connected to a TTY --pty is used, and otherwise --pipe. 
When this option is used the original file descriptors systemd-run receives are passed to the service processes as-is. If the service runs with different privileges than systemd-run, this means the service might not be able to re-open the passed file descriptors, due to normal file descriptor access restrictions. If the invoked process is a shell script that uses the echo "hello" >/dev/stderr construct for writing messages to stderr, this might cause problems, as this only works if stderr can be re-opened. To mitigate this use the construct echo "hello" >&2 instead, which is mostly equivalent and avoids this pitfall. --shell, -S A shortcut for "--pty --same-dir --wait --collect --service-type=exec $SHELL", i.e. requests an interactive shell in the current working directory, running in service context, accessible with a single switch. --quiet, -q Suppresses additional informational output while running. This is particularly useful in combination with --pty when it will suppress the initial message explaining how to terminate the TTY connection. --on-active=, --on-boot=, --on-startup=, --on-unit-active=, --on-unit-inactive= Defines a monotonic timer relative to different starting points for starting the specified command. See OnActiveSec=, OnBootSec=, OnStartupSec=, OnUnitActiveSec= and OnUnitInactiveSec= in systemd.timer(5) for details. These options are shortcuts for --timer-property= with the relevant properties. These options may not be combined with --scope or --pty. --on-calendar= Defines a calendar timer for starting the specified command. See OnCalendar= in systemd.timer(5). This option is a shortcut for --timer-property=OnCalendar=. This option may not be combined with --scope or --pty. --on-clock-change, --on-timezone-change Defines a trigger based on system clock jumps or timezone changes for starting the specified command. See OnClockChange= and OnTimezoneChange= in systemd.timer(5). These options are shortcuts for --timer-property=OnClockChange=yes and --timer-property=OnTimezoneChange=yes. These options may not be combined with --scope or --pty. --path-property=, --socket-property=, --timer-property= Sets a property on the path, socket, or timer unit that is created. This option is similar to --property=, but applies to the transient path, socket, or timer unit rather than the transient service unit created. This option takes an assignment in the same format as systemctl(1)'s set-property command. These options may not be combined with --scope or --pty. --no-block Do not synchronously wait for the unit start operation to finish. If this option is not specified, the start request for the transient unit will be verified, enqueued and systemd-run will wait until the unit's start-up is completed. By passing this argument, it is only verified and enqueued. This option may not be combined with --wait. --wait Synchronously wait for the transient service to terminate. If this option is specified, the start request for the transient unit is verified, enqueued, and waited for. Subsequently the invoked unit is monitored, and it is waited until it is deactivated again (most likely because the specified command completed). On exit, terse information about the unit's runtime is shown, including total runtime (as well as CPU usage, if --property=CPUAccounting=1 was set) and the exit code and status of the main process. This output may be suppressed with --quiet. This option may not be combined with --no-block, --scope or the various path, socket, or timer options. 
-G, --collect Unload the transient unit after it completed, even if it failed. Normally, without this option, all units that ran and failed are kept in memory until the user explicitly resets their failure state with systemctl reset-failed or an equivalent command. On the other hand, units that ran successfully are unloaded immediately. If this option is turned on the "garbage collection" of units is more aggressive, and unloads units regardless if they exited successfully or failed. This option is a shortcut for --property=CollectMode=inactive-or-failed, see the explanation for CollectMode= in systemd.unit(5) for further information. --user Talk to the service manager of the calling user, rather than the service manager of the system. --system Talk to the service manager of the system. This is the implied default. -H, --host= Execute the operation remotely. Specify a hostname, or a username and hostname separated by "@", to connect to. The hostname may optionally be suffixed by a port ssh is listening on, separated by ":", and then a container name, separated by "/", which connects directly to a specific container on the specified host. This will use SSH to talk to the remote machine manager instance. Container names may be enumerated with machinectl -H HOST. Put IPv6 addresses in brackets. -M, --machine= Execute operation on a local container. Specify a container name to connect to, optionally prefixed by a user name to connect as and a separating "@" character. If the special string ".host" is used in place of the container name, a connection to the local system is made (which is useful to connect to a specific user's user bus: "--user --machine=lennart@.host"). If the "@" syntax is not used, the connection is made as root user. If the "@" syntax is used either the left hand side or the right hand side may be omitted (but not both) in which case the local user name and ".host" are implied. -h, --help Print a short help text and exit. --version Print a short version string and exit. All command line arguments after the first non-option argument become part of the command line of the launched process. | # systemd-run
> Run programs in transient scope units, service units, or path-, socket-, or
> timer-triggered service units. More information:
> https://www.freedesktop.org/software/systemd/man/systemd-run.html.
* Start a transient service:
`sudo systemd-run {{command}} {{argument1 argument2 ...}}`
* Start a transient service under the service manager of the current user (no root privileges required):
`systemd-run --user {{command}} {{argument1 argument2 ...}}`
* Start a transient service with a custom unit name and description:
`sudo systemd-run --unit={{name}} --description={{string}} {{command}}
{{argument1 argument2 ...}}`
* Start a transient service that is kept around after it terminates, with a custom environment variable set:
`sudo systemd-run --remain-after-exit --setenv={{name}}={{value}} {{command}} {{argument1 argument2 ...}}`
* Start a transient timer that periodically runs its transient service (see `man systemd.time` for calendar event format):
`sudo systemd-run --on-calendar={{calendar_event}} {{command}} {{argument1
argument2 ...}}`
* Share the terminal with the program (allowing interactive input/output) and keep the service unit loaded after the program exits so its status can still be inspected:
`systemd-run --remain-after-exit --pty {{command}}`
* Set properties (e.g. CPUQuota, MemoryMax) of the process and wait until it exits:
`systemd-run --property MemoryMax={{memory_in_bytes}} --property
CPUQuota={{percentage_of_CPU_time}}% --wait {{command}}`
* Use the program in a shell pipeline:
`{{command1}} | systemd-run --pipe {{command2}} | {{command3}}` |
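Two further sketches based on the options documented above: a monotonic timer that starts the command after the given delay, and the `--shell` shortcut for an interactive shell running in service context (the delay, unit name and command are placeholders):
`sudo systemd-run --on-active={{30s}} --unit={{unit_name}} {{command}}`
`systemd-run --user --shell`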
tput | The @TPUT@ utility uses the terminfo database to make the values of terminal-dependent capabilities and information available to the shell (see sh(1)), to initialize or reset the terminal, or return the long name of the requested terminal type. The result depends upon the capability's type: string @TPUT@ writes the string to the standard output. No trailing newline is supplied. integer @TPUT@ writes the decimal value to the standard output, with a trailing newline. boolean @TPUT@ simply sets the exit code (0 for TRUE if the terminal has the capability, 1 for FALSE if it does not), and writes nothing to the standard output. Before using a value returned on the standard output, the application should test the exit code (e.g., $?, see sh(1)) to be sure it is 0. (See the EXIT CODES and DIAGNOSTICS sections.) For a complete list of capabilities and the capname associated with each, see terminfo(5). Options -S allows more than one capability per invocation of @TPUT@. The capabilities must be passed to @TPUT@ from the standard input instead of from the command line (see example). Only one capname is allowed per line. The -S option changes the meaning of the 0 and 1 boolean and string exit codes (see the EXIT CODES section). Because some capabilities may use string parameters rather than numbers, @TPUT@ uses a table and the presence of parameters in its input to decide whether to use tparm(3X), and how to interpret the parameters. -Ttype indicates the type of terminal. Normally this option is unnecessary, because the default is taken from the environment variable TERM. If -T is specified, then the shell variables LINES and COLUMNS will also be ignored. -V reports the version of ncurses which was used in this program, and exits. -x do not attempt to clear the terminal's scrollback buffer using the extended “E3” capability. Commands A few commands (init, reset and longname) are special; they are defined by the @TPUT@ program. The others are the names of capabilities from the terminal database (see terminfo(5) for a list). Although init and reset resemble capability names, @TPUT@ uses several capabilities to perform these special functions. capname indicates the capability from the terminal database. If the capability is a string that takes parameters, the arguments following the capability will be used as parameters for the string. Most parameters are numbers. Only a few terminal capabilities require string parameters; @TPUT@ uses a table to decide which to pass as strings. Normally @TPUT@ uses tparm(3X) to perform the substitution. If no parameters are given for the capability, @TPUT@ writes the string without performing the substitution. init If the terminal database is present and an entry for the user's terminal exists (see -Ttype, above), the following will occur: (1) first, @TPUT@ retrieves the current terminal mode settings for your terminal. It does this by successively testing • the standard error, • standard output, • standard input and • ultimately “/dev/tty” to obtain terminal settings. Having retrieved these settings, @TPUT@ remembers which file descriptor to use when updating settings. (2) if the window size cannot be obtained from the operating system, but the terminal description (or environment, e.g., LINES and COLUMNS variables specify this), update the operating system's notion of the window size. 
(3) the terminal modes will be updated: • any delays (e.g., newline) specified in the entry will be set in the tty driver, • tabs expansion will be turned on or off according to the specification in the entry, and • if tabs are not expanded, standard tabs will be set (every 8 spaces). (4) if present, the terminal's initialization strings will be output as detailed in the terminfo(5) section on Tabs and Initialization, (5) output is flushed. If an entry does not contain the information needed for any of these activities, that activity will silently be skipped. reset This is similar to init, with two differences: (1) before any other initialization, the terminal modes will be reset to a “sane” state: • set cooked and echo modes, • turn off cbreak and raw modes, • turn on newline translation and • reset any unset special characters to their default values (2) Instead of putting out initialization strings, the terminal's reset strings will be output if present (rs1, rs2, rs3, rf). If the reset strings are not present, but initialization strings are, the initialization strings will be output. Otherwise, reset acts identically to init. longname If the terminal database is present and an entry for the user's terminal exists (see -Ttype above), then the long name of the terminal will be put out. The long name is the last name in the first line of the terminal's description in the terminfo database [see term(5)]. Aliases @TPUT@ handles the clear, init and reset commands specially: it allows for the possibility that it is invoked by a link with those names. If @TPUT@ is invoked by a link named reset, this has the same effect as @TPUT@ reset. The @TSET@(1) utility also treats a link named reset specially. Before ncurses 6.1, the two utilities were different from each other: • @TSET@ utility reset the terminal modes and special characters (not done with @TPUT@). • On the other hand, @TSET@'s repertoire of terminal capabilities for resetting the terminal was more limited, i.e., only reset_1string, reset_2string and reset_file in contrast to the tab-stops and margins which are set by this utility. • The reset program is usually an alias for @TSET@, because of this difference with resetting terminal modes and special characters. With the changes made for ncurses 6.1, the reset feature of the two programs is (mostly) the same. A few differences remain: • The @TSET@ program waits one second when resetting, in case it happens to be a hardware terminal. • The two programs write the terminal initialization strings to different streams (i.e., the standard error for @TSET@ and the standard output for @TPUT@). Note: although these programs write to different streams, redirecting their output to a file will capture only part of their actions. The changes to the terminal modes are not affected by redirecting the output. If @TPUT@ is invoked by a link named init, this has the same effect as @TPUT@ init. Again, you are less likely to use that link because another program named init has a more well- established use. Terminal Size Besides the special commands (e.g., clear), @TPUT@ treats certain terminfo capabilities specially: lines and cols. 
@TPUT@ calls setupterm(3X) to obtain the terminal size: • first, it gets the size from the terminal database (which generally is not provided for terminal emulators which do not have a fixed window size) • then it asks the operating system for the terminal's size (which generally works, unless connecting via a serial line which does not support NAWS: negotiations about window size). • finally, it inspects the environment variables LINES and COLUMNS which may override the terminal size. If the -T option is given @TPUT@ ignores the environment variables by calling use_tioctl(TRUE), relying upon the operating system (or finally, the terminal database). | # tput
> View and modify terminal settings and capabilities. More information:
> https://manned.org/tput.
* Move the cursor to a screen location:
`tput cup {{row}} {{column}}`
* Set foreground (af) or background (ab) color:
`tput {{setaf|setab}} {{ansi_color_code}}`
* Show number of columns, lines, or colors:
`tput {{cols|lines|colors}}`
* Ring the terminal bell:
`tput bel`
* Reset all terminal attributes:
`tput sgr0`
* Enable or disable word wrap:
`tput {{smam|rmam}}` |
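With `-S`, tput reads one capability (plus its parameters) per line from `stdin`, so several can be applied in a single invocation; a sketch, with arbitrary capabilities and coordinates:
`printf '%s\n' clear 'cup 10 20' bold | tput -S`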
link | Call the link function to create a link named FILE2 to an existing FILE1. --help display this help and exit --version output version information and exit | # link
> Create a hard link to an existing file. For more options, see the `ln`
> command. More information: https://www.gnu.org/software/coreutils/link.
* Create a hard link from a new file to an existing file:
`link {{path/to/existing_file}} {{path/to/new_file}}` |
logname | The logname utility shall write the user's login name to standard output. The login name shall be the string that would be returned by the getlogin() function defined in the System Interfaces volume of POSIX.1‐2017. Under the conditions where the getlogin() function would fail, the logname utility shall write a diagnostic message to standard error and exit with a non-zero exit status. None. | # logname
> Shows the user's login name. More information:
> https://www.gnu.org/software/coreutils/logname.
* Display the currently logged in user's name:
`logname` |
iconv | The iconv utility shall convert the encoding of characters in file from one codeset to another and write the results to standard output. When the options indicate that charmap files are used to specify the codesets (see OPTIONS), the codeset conversion shall be accomplished by performing a logical join on the symbolic character names in the two charmaps. The implementation need not support the use of charmap files for codeset conversion unless the POSIX2_LOCALEDEF symbol is defined on the system. The iconv utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines. The following options shall be supported: -c Omit any characters that are invalid in the codeset of the input file from the output. When -c is not used, the results of encountering invalid characters in the input stream (either those that are not characters in the codeset of the input file or that have no corresponding character in the codeset of the output file) shall be specified in the system documentation. The presence or absence of -c shall not affect the exit status of iconv. -f fromcodeset Identify the codeset of the input file. The implementation shall recognize the following two forms of the fromcodeset option-argument: fromcode The fromcode option-argument must not contain a <slash> character. It shall be interpreted as the name of one of the codeset descriptions provided by the implementation in an unspecified format. Valid values of fromcode are implementation-defined. frommap The frommap option-argument must contain a <slash> character. It shall be interpreted as the pathname of a charmap file as defined in the Base Definitions volume of POSIX.1‐2017, Section 6.4, Character Set Description File. If the pathname does not represent a valid, readable charmap file, the results are undefined. If this option is omitted, the codeset of the current locale shall be used. -l Write all supported fromcode and tocode values to standard output in an unspecified format. -s Suppress any messages written to standard error concerning invalid characters. When -s is not used, the results of encountering invalid characters in the input stream (either those that are not valid characters in the codeset of the input file or that have no corresponding character in the codeset of the output file) shall be specified in the system documentation. The presence or absence of -s shall not affect the exit status of iconv. -t tocodeset Identify the codeset to be used for the output file. The implementation shall recognize the following two forms of the tocodeset option-argument: tocode The semantics shall be equivalent to the -f fromcode option. tomap The semantics shall be equivalent to the -f frommap option. If this option is omitted, the codeset of the current locale shall be used. If either -f or -t represents a charmap file, but the other does not (or is omitted), or both -f and -t are omitted, the results are undefined. | # iconv
> Converts text from one encoding to another. More information:
> https://manned.org/iconv.
* Convert a file to a specific encoding and print the result to `stdout`:
`iconv -f {{from_encoding}} -t {{to_encoding}} {{input_file}}`
* Convert a file to the current locale's encoding and write the result to a file:
`iconv -f {{from_encoding}} {{input_file}} > {{output_file}}`
* List supported encodings:
`iconv -l` |
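A hedged example combining the options above; `ISO-8859-1` and `UTF-8` are typical codeset names, but valid names are implementation-defined, so confirm them with `iconv -l`:
* Convert a Latin-1 file to UTF-8, omitting input characters that are invalid in the source codeset (`-c`):
`iconv -c -f ISO-8859-1 -t UTF-8 {{input_file}} > {{output_file}}`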
paste | The paste utility shall concatenate the corresponding lines of the given input files, and write the resulting lines to standard output. The default operation of paste shall concatenate the corresponding lines of the input files. The <newline> of every line except the line from the last input file shall be replaced with a <tab>. If an end-of-file condition is detected on one or more input files, but not all input files, paste shall behave as though empty lines were read from the files on which end-of-file was detected, unless the -s option is specified. The paste utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines. The following options shall be supported: -d list Unless a <backslash> character appears in list, each character in list is an element specifying a delimiter character. If a <backslash> character appears in list, the <backslash> character and one or more characters following it are an element specifying a delimiter character as described below. These elements specify one or more delimiters to use, instead of the default <tab>, to replace the <newline> of the input lines. The elements in list shall be used circularly; that is, when the list is exhausted the first element from the list is reused. When the -s option is specified: * The last <newline> in a file shall not be modified. * The delimiter shall be reset to the first element of list after each file operand is processed. When the -s option is not specified: * The <newline> characters in the file specified by the last file operand shall not be modified. * The delimiter shall be reset to the first element of list each time a line is processed from each file. If a <backslash> character appears in list, it and the character following it shall be used to represent the following delimiter characters: \n <newline>. \t <tab>. \\ <backslash> character. \0 Empty string (not a null character). If '\0' is immediately followed by the character 'x', the character 'X', or any character defined by the LC_CTYPE digit keyword (see the Base Definitions volume of POSIX.1‐2017, Chapter 7, Locale), the results are unspecified. If any other characters follow the <backslash>, the results are unspecified. -s Concatenate all of the lines from each input file into one line of output per file, in command line order. The <newline> of every line except the last line in each input file shall be replaced with a <tab>, unless otherwise specified by the -d option. If an input file is empty, the output line corresponding to that file shall consist of only a <newline> character. | # paste
> Merge lines of files. More information:
> https://www.gnu.org/software/coreutils/paste.
* Join all the lines into a single line, using TAB as delimiter:
`paste -s {{path/to/file}}`
* Join all the lines into a single line, using the specified delimiter:
`paste -s -d {{delimiter}} {{path/to/file}}`
* Merge two files side by side, each in its own column, using TAB as the delimiter:
`paste {{file1}} {{file2}}`
* Merge two files side by side, each in its column, using the specified delimiter:
`paste -d {{delimiter}} {{file1}} {{file2}}`
* Merge two files, interleaving their lines (one line from each file alternately):
`paste -d '\n' {{file1}} {{file2}}` |
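A sketch of the circular delimiter list described above; the file names are placeholders:
* Merge three files side by side, separating the first pair of columns with a comma and the second pair with a colon (the list `,:` is reused circularly):
`paste -d ',:' {{file1}} {{file2}} {{file3}}`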
ls | For each operand that names a file of a type other than directory or symbolic link to a directory, ls shall write the name of the file as well as any requested, associated information. For each operand that names a file of type directory, ls shall write the names of files contained within the directory as well as any requested, associated information. Filenames beginning with a <period> ('.') and any associated information shall not be written out unless explicitly referenced, the -A or -a option is supplied, or an implementation-defined condition causes them to be written. If one or more of the -d, -F, or -l options are specified, and neither the -H nor the -L option is specified, for each operand that names a file of type symbolic link to a directory, ls shall write the name of the file as well as any requested, associated information. If none of the -d, -F, or -l options are specified, or the -H or -L options are specified, for each operand that names a file of type symbolic link to a directory, ls shall write the names of files contained within the directory as well as any requested, associated information. In each case where the names of files contained within a directory are written, if the directory contains any symbolic links then ls shall evaluate the file information and file type to be those of the symbolic link itself, unless the -L option is specified. If no operands are specified, ls shall behave as if a single operand of dot ('.') had been specified. If more than one operand is specified, ls shall write non-directory operands first; it shall sort directory and non-directory operands separately according to the collating sequence in the current locale. Whenever ls sorts filenames or pathnames according to the collating sequence in the current locale, if this collating sequence does not have a total ordering of all characters (see the Base Definitions volume of POSIX.1‐2017, Section 7.3.2, LC_COLLATE), then any filenames or pathnames that collate equally should be further compared byte-by-byte using the collating sequence for the POSIX locale. The ls utility shall detect infinite loops; that is, entering a previously visited directory that is an ancestor of the last file encountered. When it detects an infinite loop, ls shall write a diagnostic message to standard error and shall either recover its position in the hierarchy or terminate. The ls utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines. The following options shall be supported: -A Write out all directory entries, including those whose names begin with a <period> ('.') but excluding the entries dot and dot-dot (if they exist). -C Write multi-text-column output with entries sorted down the columns, according to the collating sequence. The number of text columns and the column separator characters are unspecified, but should be adapted to the nature of the output device. This option disables long format output. -F Do not follow symbolic links named as operands unless the -H or -L options are specified. Write a <slash> ('/') immediately after each pathname that is a directory, an <asterisk> ('*') after each that is executable, a <vertical-line> ('|') after each that is a FIFO, and an at-sign ('@') after each that is a symbolic link. For other file types, other symbols may be written. 
-H Evaluate the file information and file type for symbolic links specified on the command line to be those of the file referenced by the link, and not the link itself; however, ls shall write the name of the link itself and not the file referenced by the link. -L Evaluate the file information and file type for all symbolic links (whether named on the command line or encountered in a file hierarchy) to be those of the file referenced by the link, and not the link itself; however, ls shall write the name of the link itself and not the file referenced by the link. When -L is used with -l, write the contents of symbolic links in the long format (see the STDOUT section). -R Recursively list subdirectories encountered. When a symbolic link to a directory is encountered, the directory shall not be recursively listed unless the -L option is specified. The use of -R with -d or -f produces unspecified results. -S Sort with the primary key being file size (in decreasing order) and the secondary key being filename in the collating sequence (in increasing order). -a Write out all directory entries, including those whose names begin with a <period> ('.'). -c Use time of last modification of the file status information (see the Base Definitions volume of POSIX.1‐2017, sys_stat.h(0p)) instead of last modification of the file itself for sorting (-t) or writing (-l). -d Do not follow symbolic links named as operands unless the -H or -L options are specified. Do not treat directories differently than other types of files. The use of -d with -R or -f produces unspecified results. -f List the entries in directory operands in the order they appear in the directory. The behavior for non- directory operands is unspecified. This option shall turn on -a. When -f is specified, any occurrences of the -r, -S, and -t options shall be ignored and any occurrences of the -A, -g, -l, -n, -o, and -s options may be ignored. The use of -f with -R or -d produces unspecified results. -g Turn on the -l (ell) option, but disable writing the file's owner name or number. Disable the -C, -m, and -x options. -i For each file, write the file's file serial number (see stat() in the System Interfaces volume of POSIX.1‐2017). -k Set the block size for the -s option and the per- directory block count written for the -l, -n, -s, -g, and -o options (see the STDOUT section) to 1024 bytes. -l (The letter ell.) Do not follow symbolic links named as operands unless the -H or -L options are specified. Write out in long format (see the STDOUT section). Disable the -C, -m, and -x options. -m Stream output format; list pathnames across the page, separated by a <comma> character followed by a <space> character. Use a <newline> character as the list terminator and after the separator sequence when there is not room on a line for the next list entry. This option disables long format output. -n Turn on the -l (ell) option, but when writing the file's owner or group, write the file's numeric UID or GID rather than the user or group name, respectively. Disable the -C, -m, and -x options. -o Turn on the -l (ell) option, but disable writing the file's group name or number. Disable the -C, -m, and -x options. -p Write a <slash> ('/') after each filename if that file is a directory. -q Force each instance of non-printable filename characters and <tab> characters to be written as the <question-mark> ('?') character. Implementations may provide this option by default if the output is to a terminal device. 
-r Reverse the order of the sort to get reverse collating sequence oldest first, or smallest file size first depending on the other options given. -s Indicate the total number of file system blocks consumed by each file displayed. If the -k option is also specified, the block size shall be 1024 bytes; otherwise, the block size is implementation-defined. -t Sort with the primary key being time modified (most recently modified first) and the secondary key being filename in the collating sequence. For a symbolic link, the time used as the sort key is that of the symbolic link itself, unless ls is evaluating its file information to be that of the file referenced by the link (see the -H and -L options). -u Use time of last access (see the Base Definitions volume of POSIX.1‐2017, sys_stat.h(0p)) instead of last modification of the file for sorting (-t) or writing (-l). -x The same as -C, except that the multi-text-column output is produced with entries sorted across, rather than down, the columns. This option disables long format output. -1 (The numeric digit one.) Force output to be one entry per line. This option does not disable long format output. (Long format output is enabled by -g, -l (ell), -n, and -o; and disabled by -C, -m, and -x.) If an option that enables long format output (-g, -l (ell), -n, and -o is given with an option that disables long format output (-C, -m, and -x), this shall not be considered an error. The last of these options specified shall determine whether long format output is written. If -R, -d, or -f are specified, the results of specifying these mutually-exclusive options are specified by the descriptions of these options above. If more than one of any of the other options shown in the SYNOPSIS section in mutually-exclusive sets are given, this shall not be considered an error; the last option specified in each set shall determine the output. Note that if -t is specified, -c and -u are not only mutually- exclusive with each other, they are also mutually-exclusive with -S when determining sort order. But even if -S is specified after all occurrences of -c, -t, and -u, the last use of -c or -u determines the timestamp printed when producing long format output. | # ls
> List directory contents. More information:
> https://www.gnu.org/software/coreutils/ls.
* List files one per line:
`ls -1`
* List all files, including hidden files:
`ls -a`
* List files, appending an indicator to each name (`/` for directories, `*` for executables, `@` for symbolic links, `|` for FIFOs):
`ls -F`
* Long format list (permissions, ownership, size, and modification date) of all files:
`ls -la`
* Long format list with size displayed using human-readable units (KiB, MiB, GiB):
`ls -lh`
* Long format list sorted by size (descending):
`ls -lS`
* Long format list of all files, sorted by modification date (oldest first):
`ls -ltr`
* Only list directories:
`ls -d */` |
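A few further combinations of the options described above, offered as sketches rather than an exhaustive list:
* Long format list sorted by size, smallest first (`-S` reversed by `-r`):
`ls -lSr`
* Long format list showing numeric user and group IDs instead of names:
`ls -ln`
* Print each file's serial (inode) number next to its name:
`ls -i`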
mktemp | Create a temporary file or directory, safely, and print its name. TEMPLATE must contain at least 3 consecutive 'X's in last component. If TEMPLATE is not specified, use tmp.XXXXXXXXXX, and --tmpdir is implied. Files are created u+rw, and directories u+rwx, minus umask restrictions. -d, --directory create a directory, not a file -u, --dry-run do not create anything; merely print a name (unsafe) -q, --quiet suppress diagnostics about file/dir-creation failure --suffix=SUFF append SUFF to TEMPLATE; SUFF must not contain a slash. This option is implied if TEMPLATE does not end in X -p DIR, --tmpdir[=DIR] interpret TEMPLATE relative to DIR; if DIR is not specified, use $TMPDIR if set, else /tmp. With this option, TEMPLATE must not be an absolute name; unlike with -t, TEMPLATE may contain slashes, but mktemp creates only the final component -t interpret TEMPLATE as a single file name component, relative to a directory: $TMPDIR, if set; else the directory specified via -p; else /tmp [deprecated] --help display this help and exit --version output version information and exit | # mktemp
> Create a temporary file or directory. More information:
> https://ss64.com/osx/mktemp.html.
* Create an empty temporary file and print the absolute path to it:
`mktemp`
* Create an empty temporary file with a given suffix and print the absolute path to it:
`mktemp --suffix "{{.ext}}"`
* Create a temporary directory and print the absolute path to it:
`mktemp -d` |
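A common shell idiom built on the behaviour above; the `trap`-based cleanup is plain shell usage rather than a `mktemp` feature, and the template path is illustrative:
* Create a temporary directory and remove it automatically when the script exits:
`tmpdir=$(mktemp -d) && trap 'rm -rf "$tmpdir"' EXIT`
* Create a temporary file from an explicit template (the last component must contain at least 3 consecutive 'X's):
`mktemp {{/tmp/build.XXXXXX}}`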
git-range-diff | This command shows the differences between two versions of a patch series, or more generally, two commit ranges (ignoring merge commits). In the presence of <path> arguments, these commit ranges are limited accordingly. To that end, it first finds pairs of commits from both commit ranges that correspond with each other. Two commits are said to correspond when the diff between their patches (i.e. the author information, the commit message and the commit diff) is reasonably small compared to the patches' size. See ``Algorithm`` below for details. Finally, the list of matching commits is shown in the order of the second commit range, with unmatched commits being inserted just after all of their ancestors have been shown. There are three ways to specify the commit ranges: • <range1> <range2>: Either commit range can be of the form <base>..<rev>, <rev>^! or <rev>^-<n>. See SPECIFYING RANGES in gitrevisions(7) for more details. • <rev1>...<rev2>. This is equivalent to <rev2>..<rev1> <rev1>..<rev2>. • <base> <rev1> <rev2>: This is equivalent to <base>..<rev1> <base>..<rev2>. --no-dual-color When the commit diffs differ, ‘git range-diff` recreates the original diffs’ coloring, and adds outer -/+ diff markers with the background being red/green to make it easier to see e.g. when there was a change in what exact lines were added. Additionally, the commit diff lines that are only present in the first commit range are shown "dimmed" (this can be overridden using the color.diff.<slot> config setting where <slot> is one of contextDimmed, oldDimmed and newDimmed), and the commit diff lines that are only present in the second commit range are shown in bold (which can be overridden using the config settings color.diff.<slot> with <slot> being one of contextBold, oldBold or newBold). This is known to range-diff as "dual coloring". Use --no-dual-color to revert to color all lines according to the outer diff markers (and completely ignore the inner diff when it comes to color). --creation-factor=<percent> Set the creation/deletion cost fudge factor to <percent>. Defaults to 60. Try a larger value if git range-diff erroneously considers a large change a total rewrite (deletion of one commit and addition of another), and a smaller one in the reverse case. See the ``Algorithm`` section below for an explanation why this is needed. --left-only Suppress commits that are missing from the first specified range (or the "left range" when using the <rev1>...<rev2> format). --right-only Suppress commits that are missing from the second specified range (or the "right range" when using the <rev1>...<rev2> format). --[no-]notes[=<ref>] This flag is passed to the git log program (see git-log(1)) that generates the patches. <range1> <range2> Compare the commits specified by the two ranges, where <range1> is considered an older version of <range2>. <rev1>...<rev2> Equivalent to passing <rev2>..<rev1> and <rev1>..<rev2>. <base> <rev1> <rev2> Equivalent to passing <base>..<rev1> and <base>..<rev2>. Note that <base> does not need to be the exact branch point of the branches. Example: after rebasing a branch my-topic, git range-diff my-topic@{u} my-topic@{1} my-topic would show the differences introduced by the rebase. git range-diff also accepts the regular diff options (see git-diff(1)), most notably the --color=[<when>] and --no-color options. These options are used when generating the "diff between patches", i.e. to compare the author, commit message and diff of corresponding old/new commits. 
There is currently no means to tweak most of the diff options passed to git log when generating those patches. | # git range-diff
> Compare two commit ranges (e.g. two versions of a branch). More information:
> https://git-scm.com/docs/git-range-diff.
* Diff the changes of two individual commits:
`git range-diff {{commit_1}}^! {{commit_2}}^!`
* Diff the changes that `ours` and `theirs` have each made since their common ancestor, e.g. after an interactive rebase:
`git range-diff {{theirs}}...{{ours}}`
* Diff the changes of two commit ranges, e.g. to check whether conflicts have been resolved appropriately when rebasing commits from `base1` to `base2`:
`git range-diff {{base1}}..{{rev1}} {{base2}}..{{rev2}}` |
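One more example, restating the `<base> <rev1> <rev2>` case from the description above:
* After rebasing a branch, show the differences introduced by the rebase (upstream as the base, pre-rebase tip vs. current tip):
`git range-diff {{branch}}@{u} {{branch}}@{1} {{branch}}`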
quilt | Quilt is a tool to manage large sets of patches by keeping track of the changes each patch makes. Patches can be applied, unapplied, refreshed, and so forth. The key philosophical concept is that your primary working material is patches. With quilt, all work occurs within a single directory tree. Commands can be invoked from anywhere within the source tree. Like CVS, Subversion, or Git, quilt takes commands of the form “quilt command”. A command can be truncated (abbreviated) as long as the specified part of the command is unambiguous. If command is ambiguously short, quilt lists all commands matching that prefix and exits. All commands print a brief contextual help message and exit if given the “-h” option. Quilt manages a stack of patches. Patches are applied incrementally on top of the base tree plus all preceding patches. They can be pushed onto the stack (“quilt push”), and popped off the stack (“quilt pop”). Commands are available for querying the contents of the stack (“quilt applied”, “quilt previous”, “quilt top”) and the patches that are not applied at a particular moment (“quilt next”, “quilt unapplied”). By default, most commands apply to the topmost patch on the stack. Patch files are located in the patches subdirectory of the source tree (see Example of working tree, under FILES, below). The QUILT_PATCHES environment variable overrides this default location. When not found in the current directory, that subdirectory is searched recursively in the parent directories (this is similar to the way Git searches for its configuration files). The patches directory may contain subdirectories. It may also be a symbolic link instead of a directory. Quilt creates and maintains a file called series, which defines the order in which patches are applied. The QUILT_SERIES environment variable overrides this default name. You can query the contents of the series file at any time with “quilt series”. In this file, each patch file name is on a separate line. Patch files are identified by path names that are relative to the patches directory; patches may be in subdirectories below this directory. Lines in the series file that start with a hash character (#) are ignored. Patch options, such as the strip level or whether the patch is reversed, can be added after each patch file name. Options are introduced by a space, separated by spaces, and follow the syntax of the patch(1) options (e.g., “-p2”). Quilt records patch options automatically when a command supporting them is used. Without options, strip level 1 is assumed. You can also add a comment after each patch file name and options, introduced by a space followed by a hash character. When quilt adds, removes, or renames patches, it automatically updates the series file. Users of quilt can modify series files while some patches are applied, as long as the applied patches remain in their original order. Unless there are means by which a series file can be generated automatically, you should provide it along with any set of quilt-managed patches you distribute. Different series files can be used to assemble patches in different ways, corresponding (for example) to different development branches. Before a patch is applied, copies of all files the patch modifies are saved to the .pc/patch-name directory, where patch-name is the name of the patch (for example, fix-buffer-overflow.patch). The patch is added to the list of currently applied patches (.pc/applied-patches). 
Later, when a patch is regenerated (“quilt refresh”), the backup copies in .pc/patch-name are compared with the current versions of the files in the source tree using GNU diff(1). A similar process occurs when starting a new patch (“quilt new”); the new patch file name is added to the series file. A file to be changed by the patch is backed up and opened for editing (“quilt edit”). After editing, inspect the impact of your changes (“quilt diff”); the changes stay local to your working tree until you call “quilt refresh” to write them to the patch file. Documentation related to a patch can be put at the beginning of its patch file (“quilt header”). Quilt is careful to preserve all text that precedes the actual patch when doing a refresh. (This is limited to patches in unified format; see the GNU Diffutils manual.) The series file is looked up in the .pc directory, in the root of the source tree, and in the patches directory. The first series file that is found is used. This may also be a symbolic link, or a file with multiple hard links. Usually, only one series file is used for a set of patches, making the patches subdirectory a convenient location. The .pc directory cannot be relocated, but it can be a symbolic link. Its subdirectories must not be renamed or restructured. While patches are applied to the source tree, this directory is essential for many operations, including popping patches off the stack and refreshing them. Files in the .pc directory are automatically removed when they are no longer needed, so there is no need to clean up manually. Quilt commands reference add [-P patch] {file} ... Add one or more files to the topmost or named patch. Files must be added to the patch before being modified. Files that are modified by patches already applied on top of the specified patch cannot be added. -P patch Patch to add files to. annotate [-P patch] {file} Print an annotated listing of the specified file showing which patches modify which lines. Only applied patches are included. -P patch Stop checking for changes at the specified rather than the topmost patch. applied [patch] Print a list of applied patches, or all patches up to and including the specified patch in the file series. delete [-r] [--backup] [patch|-n] Remove the specified or topmost patch from the series file. If the patch is applied, quilt will attempt to remove it first. (Only the topmost patch can be removed right now.) -n Delete the next patch after topmost, rather than the specified or topmost patch. -r Remove the deleted patch file from the patches directory as well. --backup Rename the patch file to patch~ rather than deleting it. Ignored if not used with `-r'. diff [-p n|-p ab] [-u|-U num|-c|-C num] [--combine patch|-z] [-R] [-P patch] [--snapshot] [--diff=utility] [--no-timestamps] [--no- index] [--sort] [--color[=always|auto|never]] [file ...] Produces a diff of the specified file(s) in the topmost or specified patch. If no files are specified, all files that are modified are included. -p n Create a -p n style patch (-p0 or -p1 are supported). -p ab Create a -p1 style patch, but use a/file and b/file as the original and new filenames instead of the default dir.orig/file and dir/file names. -u, -U num, -c, -C num Create a unified diff (-u, -U) with num lines of context. Create a context diff (-c, -C) with num lines of context. The number of context lines defaults to 3. --no-timestamps Do not include file timestamps in patch headers. --no-index Do not output Index: lines. 
-z Write to standard output the changes that have been made relative to the topmost or specified patch. -R Create a reverse diff. -P patch Create a diff for the specified patch. (Defaults to the topmost patch.) --combine patch Create a combined diff for all patches between this patch and the patch specified with -P. A patch name of `-' is equivalent to specifying the first applied patch. --snapshot Diff against snapshot (see `quilt snapshot -h'). --diff=utility Use the specified utility for generating the diff. The utility is invoked with the original and new file name as arguments. --color[=always|auto|never] Use syntax coloring (auto activates it only if the output is a tty). --sort Sort files by their name instead of preserving the original order. edit file ... Edit the specified file(s) in $EDITOR after adding it (them) to the topmost patch. files [-v] [-a] [-l] [--combine patch] [patch] Print the list of files that the topmost or specified patch changes. -a List all files in all applied patches. -l Add patch name to output. -v Verbose, more user friendly output. --combine patch Create a listing for all patches between this patch and the topmost or specified patch. A patch name of `-' is equivalent to specifying the first applied patch. fold [-R] [-q] [-f] [-p strip-level] Integrate the patch read from standard input into the topmost patch: After making sure that all files modified are part of the topmost patch, the patch is applied with the specified strip level (which defaults to 1). -R Apply patch in reverse. -q Quiet operation. -f Force apply, even if the patch has rejects. Unless in quiet mode, apply the patch interactively: the patch utility may ask questions. -p strip-level The number of pathname components to strip from file names when applying patchfile. fork [new_name] Fork the topmost patch. Forking a patch means creating a verbatim copy of it under a new name, and use that new name instead of the original one in the current series. This is useful when a patch has to be modified, but the original version of it should be preserved, e.g. because it is used in another series, or for the history. A typical sequence of commands would be: fork, edit, refresh. If new_name is missing, the name of the forked patch will be the current patch name, followed by `-2'. If the patch name already ends in a dash-and-number, the number is further incremented (e.g., patch.diff, patch-2.diff, patch-3.diff). graph [--all] [--reduce] [--lines[=num]] [--edge-labels=files] [-T ps] [patch] Generate a dot(1) directed graph showing the dependencies between applied patches. A patch depends on another patch if both touch the same file or, with the --lines option, if their modifications overlap. Unless otherwise specified, the graph includes all patches that the topmost patch depends on. When a patch name is specified, instead of the topmost patch, create a graph for the specified patch. The graph will include all other patches that this patch depends on, as well as all patches that depend on this patch. --all Generate a graph including all applied patches and their dependencies. (Unapplied patches are not included.) --reduce Eliminate transitive edges from the graph. --lines[=num] Compute dependencies by looking at the lines the patches modify. Unless a different num is specified, two lines of context are included. --edge-labels=files Label graph edges with the file names that the adjacent patches modify. -T ps Directly produce a PostScript output file. 
grep [-h|options] {pattern} Grep through the source files, recursively, skipping patches and quilt meta-information. If no filename argument is given, the whole source tree is searched. Please see the grep(1) manual page for options. -h Print this help. The grep -h option can be passed after a double-dash (--). Search expressions that start with a dash can be passed after a second double-dash (-- --). header [-a|-r|-e] [--backup] [--strip-diffstat] [--strip- trailing-whitespace] [patch] Print or change the header of the topmost or specified patch. -a, -r, -e Append to (-a) or replace (-r) the exiting patch header, or edit (-e) the header in $EDITOR. If none of these options is given, print the patch header. --strip-diffstat Strip diffstat output from the header. --strip-trailing-whitespace Strip trailing whitespace at the end of lines of the header. --backup Create a backup copy of the old version of a patch as patch~. import [-p num] [-R] [-P patch] [-f] [-d {o|a|n}] patchfile ... Import external patches. The patches will be inserted following the current top patch, and must be pushed after import to apply them. -p num Number of directory levels to strip when applying (default=1) -R Apply patch in reverse. -P patch Patch filename to use inside quilt. This option can only be used when importing a single patch. -f Overwrite/update existing patches. -d {o|a|n} When overwriting in existing patch, keep the old (o), all (a), or new (n) patch header. If both patches include headers, this option must be specified. This option is only effective when -f is used. mail {--mbox file|--send} [-m text] [-M file] [--prefix prefix] [--sender ...] [--from ...] [--to ...] [--cc ...] [--bcc ...] [--subject ...] [--reply-to message] [--charset ...] [--signature file] [first_patch [last_patch]] Create mail messages from a specified range of patches, or all patches in the series file, and either store them in a mailbox file, or send them immediately. The editor is opened with a template for the introduction. Please see /usr/local/share/doc/quilt/README.MAIL for details. When specifying a range of patches, a first patch name of `-' denotes the first, and a last patch name of `-' denotes the last patch in the series. -m text Text to use as the text in the introduction. When this option is used, the editor will not be invoked, and the patches will be processed immediately. -M file Like the -m option, but read the introduction from file. --prefix prefix Use an alternate prefix in the bracketed part of the subjects generated. Defaults to `patch'. --mbox file Store all messages in the specified file in mbox format. The mbox can later be sent using formail, for example. --send Send the messages directly. --sender The envelope sender address to use. The address must be of the form `user@domain.name'. No display name is allowed. --from, --subject The values for the From and Subject headers to use. If no --from option is given, the value of the --sender option is used. --to, --cc, --bcc Append a recipient to the To, Cc, or Bcc header. --charset Specify a particular message encoding on systems which don't use UTF-8 or ISO-8859-15. This character encoding must match the one used in the patches. --signature file Append the specified signature to messages (defaults to ~/.signature if found; use `-' for no signature). --reply-to message Add the appropriate headers to reply to the specified message. new [-p n|-p ab] {patchname} Create a new patch with the specified file name, and insert it after the topmost patch. 
The name can be prefixed with a sub-directory name, allowing for grouping related patches together. -p n Create a -p n style patch (-p0 or -p1 are supported). -p ab Create a -p1 style patch, but use a/file and b/file as the original and new filenames instead of the default dir.orig/file and dir/file names. Quilt can be used in sub-directories of a source tree. It determines the root of a source tree by searching for a directory above the current working directory. Create a directory in the intended root directory if quilt chooses a top-level directory that is too high up in the directory tree. next [patch] Print the name of the next patch after the specified or topmost patch in the series file. patches [-v] [--color[=always|auto|never]] {file} [files...] Print the list of patches that modify any of the specified files. (Uses a heuristic to determine which files are modified by unapplied patches. Note that this heuristic is much slower than scanning applied patches.) -v Verbose, more user friendly output. --color[=always|auto|never] Use syntax coloring (auto activates it only if the output is a tty). pop [-afRqv] [--refresh] [num|patch] Remove patch(es) from the stack of applied patches. Without options, the topmost patch is removed. When a number is specified, remove the specified number of patches. When a patch name is specified, remove patches until the specified patch end up on top of the stack. Patch names may include the patches/ prefix, which means that filename completion can be used. -a Remove all applied patches. -f Force remove. The state before the patch(es) were applied will be restored from backup files. -R Always verify if the patch removes cleanly; don't rely on timestamp checks. -q Quiet operation. -v Verbose operation. --refresh Automatically refresh every patch before it gets unapplied. previous [patch] Print the name of the previous patch before the specified or topmost patch in the series file. push [-afqvm] [--fuzz=N] [--merge[=merge|diff3]] [--leave- rejects] [--color[=always|auto|never]] [--refresh] [num|patch] Apply patch(es) from the series file. Without options, the next patch in the series file is applied. When a number is specified, apply the specified number of patches. When a patch name is specified, apply all patches up to and including the specified patch. Patch names may include the patches/ prefix, which means that filename completion can be used. -a Apply all patches in the series file. -q Quiet operation. -f Force apply, even if the patch has rejects. -v Verbose operation. --fuzz=N Set the maximum fuzz factor (default: 2). -m, --merge[=merge|diff3] Merge the patch file into the original files (see patch(1)). --leave-rejects Leave around the reject files patch produced, even if the patch is not actually applied. --color[=always|auto|never] Use syntax coloring (auto activates it only if the output is a tty). --refresh Automatically refresh every patch after it was successfully applied. refresh [-p n|-p ab] [-u|-U num|-c|-C num] [-z[new_name]] [-f] [--no-timestamps] [--no-index] [--diffstat] [--sort] [--backup] [--strip-trailing-whitespace] [patch] Refreshes the specified patch, or the topmost patch by default. Documentation that comes before the actual patch in the patch file is retained. It is possible to refresh patches that are not on top. If any patches on top of the patch to refresh modify the same files, the script aborts by default. Patches can still be refreshed with -f. 
In that case this script will print a warning for each shadowed file, changes by more recent patches will be ignored, and only changes in files that have not been modified by any more recent patches will end up in the specified patch. -p n Create a -p n style patch (-p0 or -p1 supported). -p ab Create a -p1 style patch, but use a/file and b/file as the original and new filenames instead of the default dir.orig/file and dir/file names. -u, -U num, -c, -C num Create a unified diff (-u, -U) with num lines of context. Create a context diff (-c, -C) with num lines of context. The number of context lines defaults to 3. -z[new_name] Create a new patch containing the changes instead of refreshing the topmost patch. If no new name is specified, `-2' is added to the original patch name, etc. (See the fork command.) --no-timestamps Do not include file timestamps in patch headers. --no-index Do not output Index: lines. --diffstat Add a diffstat section to the patch header, or replace the existing diffstat section. -f Enforce refreshing of a patch that is not on top. --backup Create a backup copy of the old version of a patch as patch~. --sort Sort files by their name instead of preserving the original order. --strip-trailing-whitespace Strip trailing whitespace at the end of lines. remove [-P patch] {file} ... Remove one or more files from the topmost or named patch. Files that are modified by patches on top of the specified patch cannot be removed. -P patch Remove named files from the named patch. rename [-P patch] new_name Rename the topmost or named patch. -P patch Patch to rename. revert [-P patch] {file} ... Revert uncommitted changes to the topmost or named patch for the specified file(s): after the revert, 'quilt diff -z' will show no differences for those files. Changes to files that are modified by patches on top of the specified patch cannot be reverted. -P patch Revert changes in the named patch. series [--color[=always|auto|never]] [-v] Print the names of all patches in the series file. --color[=always|auto|never] Use syntax coloring (auto activates it only if the output is a tty). -v Verbose, more user friendly output. setup [-d path-prefix] [-v] [--sourcedir dir] [--fuzz=N] [--slow|--fast] {specfile|seriesfile} Initializes a source tree from an rpm spec file or a quilt series file. -d Optional path prefix for the resulting source tree. --sourcedir Directory that contains the package sources. Defaults to `.'. -v Verbose debug output. --fuzz=N Set the maximum fuzz factor (needs rpm 4.6 or later). --slow Use the original, slow method to process the spec file. In this mode, rpmbuild generates a working tree in a temporary directory while all its actions are recorded, and then everything is replayed from scratch in the target directory. --fast Use the new, faster method to process the spec file. In this mode, rpmbuild is told to generate a working tree directly in the target directory. This is the default (since quilt version 0.67). The setup command is only guaranteed to work properly on spec files where applying all the patches is the last thing done in the %prep section. This is a design limitation due to the fact that quilt can only operate on patches. If other commands in the %prep section modify the patched files, this must happen first, otherwise you won't be able to push the patch series. snapshot [-d] Take a snapshot of the current working state. After taking the snapshot, the tree can be modified in the usual ways, including pushing and popping patches. 
A diff against the tree at the moment of the snapshot can be generated with `quilt diff --snapshot'. -d Only remove current snapshot. top Print the name of the topmost patch on the current stack of applied patches. unapplied [patch] Print a list of patches that are not applied, or all patches that follow the specified patch in the series file. upgrade Upgrade the meta-data in a working tree from an old version of quilt to the current version. This command is only needed when the quilt meta-data format has changed, and the working tree still contains old-format meta-data. In that case, quilt will request to run `quilt upgrade'. These options are common to all quilt commands. -h Print a usage message (for the given command, if one is specified, otherwise for quilt itself) and exit. --quiltrc file Use file as the configuration file instead of ~/.quiltrc (or /etc/quilt.quiltrc if ~/.quiltrc does not exist). The special value “-” causes quilt not to read any configuration file. --trace Run the command in the shell's trace mode (-x) for debugging of internal operations. --version Print the version number and exit. | # quilt
> Tool to manage a series of patches. More information:
> https://savannah.nongnu.org/projects/quilt.
* Import an existing patch from a file:
`quilt import {{path/to/filename.patch}}`
* Create a new patch:
`quilt new {{filename.patch}}`
* Add a file to the current patch:
`quilt add {{path/to/file}}`
* After editing the file, refresh the current patch with the changes:
`quilt refresh`
* Apply all the patches in the series file:
`quilt push -a`
* Remove all applied patches:
`quilt pop -a` |
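Two further examples drawn from the command reference above:
* Preview the pending changes in the topmost patch before writing them out with `quilt refresh`:
`quilt diff`
* Print the name of the topmost applied patch, then the patches in the series that are not yet applied:
`quilt top && quilt unapplied`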
nohup | The nohup utility shall invoke the utility named by the utility operand with arguments supplied as the argument operands. At the time the named utility is invoked, the SIGHUP signal shall be set to be ignored. If standard input is associated with a terminal, the nohup utility may redirect standard input from an unspecified file. If the standard output is a terminal, all output written by the named utility to its standard output shall be appended to the end of the file nohup.out in the current directory. If nohup.out cannot be created or opened for appending, the output shall be appended to the end of the file nohup.out in the directory specified by the HOME environment variable. If neither file can be created or opened for appending, utility shall not be invoked. If a file is created, the file's permission bits shall be set to S_IRUSR | S_IWUSR. If standard error is a terminal and standard output is open but is not a terminal, all output written by the named utility to its standard error shall be redirected to the same open file description as the standard output. If standard error is a terminal and standard output either is a terminal or is closed, the same output shall instead be appended to the end of the nohup.out file as described above. None. | # nohup
> Run a command immune to hangups, so it keeps running after the terminal is closed. More
> information: https://www.gnu.org/software/coreutils/nohup.
* Run a process that can live beyond the terminal:
`nohup {{command}} {{argument1 argument2 ...}}`
* Run the process in the background as well, by appending `&`:
`nohup {{command}} {{argument1 argument2 ...}} &`
* Run a shell script that can live beyond the terminal:
`nohup {{path/to/script.sh}} &`
* Run a process and write the output to a specific file:
`nohup {{command}} {{argument1 argument2 ...}} > {{path/to/output_file}} &` |
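A sketch that also captures `stderr`, using ordinary shell redirection rather than any `nohup` option:
* Run a command immune to hangups, appending both `stdout` and `stderr` to a log file:
`nohup {{command}} >> {{path/to/log_file}} 2>&1 &`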
expand | The expand utility shall write files or the standard input to the standard output with <tab> characters replaced with one or more <space> characters needed to pad to the next tab stop. Any <backspace> characters shall be copied to the output and cause the column position count for tab stop calculations to be decremented; the column position count shall not be decremented below zero. The expand utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines. The following option shall be supported: -t tablist Specify the tab stops. The application shall ensure that the argument tablist consists of either a single positive decimal integer or a list of tabstops. If a single number is given, tabs shall be set that number of column positions apart instead of the default 8. If a list of tabstops is given, the application shall ensure that it consists of a list of two or more positive decimal integers, separated by <blank> or <comma> characters, in ascending order. The <tab> characters shall be set at those specific column positions. Each tab stop N shall be an integer value greater than zero, and the list is in strictly ascending order. This is taken to mean that, from the start of a line of output, tabbing to position N shall cause the next character output to be in the (N+1)th column position on that line. In the event of expand having to process a <tab> at a position beyond the last of those specified in a multiple tab-stop list, the <tab> shall be replaced by a single <space> in the output. | # expand
> Convert tabs to spaces. More information:
> https://www.gnu.org/software/coreutils/expand.
* Convert tabs in each file to spaces, writing to `stdout`:
`expand {{path/to/file}}`
* Convert tabs to spaces, reading from `stdin`:
`expand`
* Convert only leading tabs (do not convert tabs after non-blank characters):
`expand -i {{path/to/file}}`
* Set tab stops a given number of columns apart instead of the default 8:
`expand -t {{number}} {{path/to/file}}`
* Set explicit tab stops from a comma-separated, ascending list of column positions (reading `stdin`):
`expand -t {{1,4,6}}` |
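A worked example of the multiple tab-stop behaviour described above; the `printf` input is illustrative:
* Set tab stops at columns 8 and 16; the third tab falls past the last listed stop, so it is replaced by a single space:
`printf 'a\tb\tc\td\n' | expand -t 8,16`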
strace | In the simplest case strace runs the specified command until it exits. It intercepts and records the system calls which are called by a process and the signals which are received by a process. The name of each system call, its arguments and its return value are printed on standard error or to the file specified with the -o option. strace is a useful diagnostic, instructional, and debugging tool. System administrators, diagnosticians and trouble-shooters will find it invaluable for solving problems with programs for which the source is not readily available since they do not need to be recompiled in order to trace them. Students, hackers and the overly-curious will find that a great deal can be learned about a system and its system calls by tracing even ordinary programs. And programmers will find that since system calls and signals are events that happen at the user/kernel interface, a close examination of this boundary is very useful for bug isolation, sanity checking and attempting to capture race conditions. Each line in the trace contains the system call name, followed by its arguments in parentheses and its return value. An example from stracing the command "cat /dev/null" is: open("/dev/null", O_RDONLY) = 3 Errors (typically a return value of -1) have the errno symbol and error string appended. open("/foo/bar", O_RDONLY) = -1 ENOENT (No such file or directory) Signals are printed as signal symbol and decoded siginfo structure. An excerpt from stracing and interrupting the command "sleep 666" is: sigsuspend([] <unfinished ...> --- SIGINT {si_signo=SIGINT, si_code=SI_USER, si_pid=...} --- +++ killed by SIGINT +++ If a system call is being executed and meanwhile another one is being called from a different thread/process then strace will try to preserve the order of those events and mark the ongoing call as being unfinished. When the call returns it will be marked as resumed. [pid 28772] select(4, [3], NULL, NULL, NULL <unfinished ...> [pid 28779] clock_gettime(CLOCK_REALTIME, {tv_sec=1130322148, tv_nsec=3977000}) = 0 [pid 28772] <... select resumed> ) = 1 (in [3]) Interruption of a (restartable) system call by a signal delivery is processed differently as kernel terminates the system call and also arranges its immediate reexecution after the signal handler completes. read(0, 0x7ffff72cf5cf, 1) = ? ERESTARTSYS (To be restarted) --- SIGALRM {si_signo=SIGALRM, si_code=SI_KERNEL} --- rt_sigreturn({mask=[]}) = 0 read(0, "", 1) = 0 Arguments are printed in symbolic form with passion. This example shows the shell performing ">>xyzzy" output redirection: open("xyzzy", O_WRONLY|O_APPEND|O_CREAT, 0666) = 3 Here, the second and the third argument of open(2) are decoded by breaking down the flag argument into its three bitwise-OR constituents and printing the mode value in octal by tradition. Where the traditional or native usage differs from ANSI or POSIX, the latter forms are preferred. In some cases, strace output is proven to be more readable than the source. Structure pointers are dereferenced and the members are displayed as appropriate. In most cases, arguments are formatted in the most C-like fashion possible. For example, the essence of the command "ls -l /dev/null" is captured as: lstat("/dev/null", {st_mode=S_IFCHR|0666, st_rdev=makedev(0x1, 0x3), ...}) = 0 Notice how the 'struct stat' argument is dereferenced and how each member is displayed symbolically. In particular, observe how the st_mode member is carefully decoded into a bitwise-OR of symbolic and numeric values. 
Also notice in this example that the first argument to lstat(2) is an input to the system call and the second argument is an output. Since output arguments are not modified if the system call fails, arguments may not always be dereferenced. For example, retrying the "ls -l" example with a non-existent file produces the following line: lstat("/foo/bar", 0xb004) = -1 ENOENT (No such file or directory) In this case the porch light is on but nobody is home. Syscalls unknown to strace are printed raw, with the unknown system call number printed in hexadecimal form and prefixed with "syscall_": syscall_0xbad(0x1, 0x2, 0x3, 0x4, 0x5, 0x6) = -1 ENOSYS (Function not implemented) Character pointers are dereferenced and printed as C strings. Non-printing characters in strings are normally represented by ordinary C escape codes. Only the first strsize (32 by default) bytes of strings are printed; longer strings have an ellipsis appended following the closing quote. Here is a line from "ls -l" where the getpwuid(3) library routine is reading the password file: read(3, "root::0:0:System Administrator:/"..., 1024) = 422 While structures are annotated using curly braces, pointers to basic types and arrays are printed using square brackets with commas separating the elements. Here is an example from the command id(1) on a system with supplementary group ids: getgroups(32, [100, 0]) = 2 On the other hand, bit-sets are also shown using square brackets, but set elements are separated only by a space. Here is the shell, preparing to execute an external command: sigprocmask(SIG_BLOCK, [CHLD TTOU], []) = 0 Here, the second argument is a bit-set of two signals, SIGCHLD and SIGTTOU. In some cases, the bit-set is so full that printing out the unset elements is more valuable. In that case, the bit- set is prefixed by a tilde like this: sigprocmask(SIG_UNBLOCK, ~[], NULL) = 0 Here, the second argument represents the full set of all signals. General -e expr A qualifying expression which modifies which events to trace or how to trace them. The format of the expression is: [qualifier=][!]value[,value]... where qualifier is one of trace (or t), trace-fds (or trace-fd or fd or fds), abbrev (or a), verbose (or v), raw (or x), signal (or signals or s), read (or reads or r), write (or writes or w), fault, inject, status, quiet (or silent or silence or q), secontext, decode-fds (or decode-fd), decode-pids (or decode-pid), or kvm, and value is a qualifier-dependent symbol or number. The default qualifier is trace. Using an exclamation mark negates the set of values. For example, -e open means literally -e trace=open which in turn means trace only the open system call. By contrast, -e trace=!open means to trace every system call except open. In addition, the special values all and none have the obvious meanings. Note that some shells use the exclamation point for history expansion even inside quoted arguments. If so, you must escape the exclamation point with a backslash. Startup -E var=val --env=var=val Run command with var=val in its list of environment variables. -E var --env=var Remove var from the inherited list of environment variables before passing it on to the command. -p pid --attach=pid Attach to the process with the process ID pid and begin tracing. The trace may be terminated at any time by a keyboard interrupt signal (CTRL-C). strace will respond by detaching itself from the traced process(es) leaving it (them) to continue running. 
Multiple -p options can be used to attach to many processes in addition to command (which is optional if at least one -p option is given). Multiple process IDs, separated by either comma (“,”), space (“ ”), tab, or newline character, can be provided as an argument to a single -p option, so, for example, -p "$(pidof PROG)" and -p "$(pgrep PROG)" syntaxes are supported. -u username --user=username Run command with the user ID, group ID, and supplementary groups of username. This option is only useful when running as root and enables the correct execution of setuid and/or setgid binaries. Unless this option is used setuid and setgid programs are executed without effective privileges. --argv0=name Set argv[0] of the command being executed to name. Useful for tracing multi-call executables which interpret argv[0], such as busybox or kmod. Tracing -b syscall --detach-on=syscall If specified syscall is reached, detach from traced process. Currently, only execve(2) syscall is supported. This option is useful if you want to trace multi-threaded process and therefore require -f, but don't want to trace its (potentially very complex) children. -D --daemonize --daemonize=grandchild Run tracer process as a grandchild, not as the parent of the tracee. This reduces the visible effect of strace by keeping the tracee a direct child of the calling process. -DD --daemonize=pgroup --daemonize=pgrp Run tracer process as tracee's grandchild in a separate process group. In addition to reduction of the visible effect of strace, it also avoids killing of strace with kill(2) issued to the whole process group. -DDD --daemonize=session Run tracer process as tracee's grandchild in a separate session ("true daemonisation"). In addition to reduction of the visible effect of strace, it also avoids killing of strace upon session termination. -f --follow-forks Trace child processes as they are created by currently traced processes as a result of the fork(2), vfork(2) and clone(2) system calls. Note that -p PID -f will attach all threads of process PID if it is multi-threaded, not only thread with thread_id = PID. --output-separately If the --output=filename option is in effect, each processes trace is written to filename.pid where pid is the numeric process id of each process. -ff --follow-forks --output-separately Combine the effects of --follow-forks and --output-separately options. This is incompatible with -c, since no per-process counts are kept. One might want to consider using strace-log-merge(1) to obtain a combined strace log view. -I interruptible --interruptible=interruptible When strace can be interrupted by signals (such as pressing CTRL-C). 1, anywhere no signals are blocked; 2, waiting fatal signals are blocked while decoding syscall (default); 3, never fatal signals are always blocked (default if -o FILE PROG); 4, never_tstp fatal signals and SIGTSTP (CTRL-Z) are always blocked (useful to make strace -o FILE PROG not stop on CTRL-Z, default if -D). --syscall-limit=limit Detach all tracees when limit number of syscalls have been captured. Syscalls filtered out via --trace, --trace-path or --status options are not considered when keeping track of the number of syscalls that are captured. Filtering -e trace=syscall_set -e t=syscall_set --trace=syscall_set Trace only the specified set of system calls. syscall_set is defined as [!]value[,value], and value can be one of the following: syscall Trace specific syscall, specified by its name (see syscalls(2) for a reference, but also see NOTES). 
?value Question mark before the syscall qualification allows suppression of error in case no syscalls matched the qualification provided. value@64 Limit the syscall specification described by value to 64-bit personality. value@32 Limit the syscall specification described by value to 32-bit personality. value@x32 Limit the syscall specification described by value to x32 personality. all Trace all system calls. /regex Trace only those system calls that match the regex. You can use POSIX Extended Regular Expression syntax (see regex(7)). %file file Trace all system calls which take a file name as an argument. You can think of this as an abbreviation for -e trace=open,stat,chmod,unlink,... which is useful to seeing what files the process is referencing. Furthermore, using the abbreviation will ensure that you don't accidentally forget to include a call like lstat(2) in the list. Betchya woulda forgot that one. The syntax without a preceding percent sign ("-e trace=file") is deprecated. %process process Trace system calls associated with process lifecycle (creation, exec, termination). The syntax without a preceding percent sign ("-e trace=process") is deprecated. %net %network network Trace all the network related system calls. The syntax without a preceding percent sign ("-e trace=network") is deprecated. %signal signal Trace all signal related system calls. The syntax without a preceding percent sign ("-e trace=signal") is deprecated. %ipc ipc Trace all IPC related system calls. The syntax without a preceding percent sign ("-e trace=ipc") is deprecated. %desc desc Trace all file descriptor related system calls. The syntax without a preceding percent sign ("-e trace=desc") is deprecated. %memory memory Trace all memory mapping related system calls. The syntax without a preceding percent sign ("-e trace=memory") is deprecated. %creds Trace system calls that read or modify user and group identifiers or capability sets. %stat Trace stat syscall variants. %lstat Trace lstat syscall variants. %fstat Trace fstat, fstatat, and statx syscall variants. %%stat Trace syscalls used for requesting file status (stat, lstat, fstat, fstatat, statx, and their variants). %statfs Trace statfs, statfs64, statvfs, osf_statfs, and osf_statfs64 system calls. The same effect can be achieved with -e trace=/^(.*_)?statv?fs regular expression. %fstatfs Trace fstatfs, fstatfs64, fstatvfs, osf_fstatfs, and osf_fstatfs64 system calls. The same effect can be achieved with -e trace=/fstatv?fs regular expression. %%statfs Trace syscalls related to file system statistics (statfs-like, fstatfs-like, and ustat). The same effect can be achieved with -e trace=/statv?fs|fsstat|ustat regular expression. %clock Trace system calls that read or modify system clocks. %pure Trace syscalls that always succeed and have no arguments. Currently, this list includes arc_gettls(2), getdtablesize(2), getegid(2), getegid32(2), geteuid(2), geteuid32(2), getgid(2), getgid32(2), getpagesize(2), getpgrp(2), getpid(2), getppid(2), get_thread_area(2) (on architectures other than x86), gettid(2), get_tls(2), getuid(2), getuid32(2), getxgid(2), getxpid(2), getxuid(2), kern_features(2), and metag_get_tls(2) syscalls. The -c option is useful for determining which system calls might be useful to trace. For example, trace=open,close,read,write means to only trace those four system calls. Be careful when making inferences about the user/kernel boundary if only a subset of system calls are being monitored. The default is trace=all. 
-e trace-fd=set -e trace-fds=set -e fd=set -e fds=set --trace-fds=set Trace only the syscalls that operate on the specified subset of (non-negative) file descriptors. Note that usage of this option also filters out all the syscalls that do not operate on file descriptors at all. Applies in (inclusive) disjunction with the --trace-path option. -e signal=set -e signals=set -e s=set --signal=set Trace only the specified subset of signals. The default is signal=all. For example, signal=!SIGIO (or signal=!io) causes SIGIO signals not to be traced. -e status=set --status=set Print only system calls with the specified return status. The default is status=all. When using the status qualifier, because strace waits for system calls to return before deciding whether they should be printed or not, the traditional order of events may not be preserved anymore. If two system calls are executed by concurrent threads, strace will first print both the entry and exit of the first system call to exit, regardless of their respective entry time. The entry and exit of the second system call to exit will be printed afterwards. Here is an example when select(2) is called, but a different thread calls clock_gettime(2) before select(2) finishes: [pid 28779] 1130322148.939977 clock_gettime(CLOCK_REALTIME, {1130322148, 939977000}) = 0 [pid 28772] 1130322148.438139 select(4, [3], NULL, NULL, NULL) = 1 (in [3]) set can include the following elements: successful Trace system calls that returned without an error code. The -z option has the effect of status=successful. failed Trace system calls that returned with an error code. The -Z option has the effect of status=failed. unfinished Trace system calls that did not return. This might happen, for example, due to an execve call in a neighbour thread. unavailable Trace system calls that returned but strace failed to fetch the error status. detached Trace system calls for which strace detached before the return. -P path --trace-path=path Trace only system calls accessing path. Multiple -P options can be used to specify several paths. Applies in (inclusive) disjunction with the --trace-fds option. -z --successful-only Print only syscalls that returned without an error code. -Z --failed-only Print only syscalls that returned with an error code. Output format -a column --columns=column Align return values in a specific column (default column 40). -e abbrev=syscall_set -e a=syscall_set --abbrev=syscall_set Abbreviate the output from printing each member of large structures. The syntax of the syscall_set specification is the same as in the -e trace option. The default is abbrev=all. The -v option has the effect of abbrev=none. -e verbose=syscall_set -e v=syscall_set --verbose=syscall_set Dereference structures for the specified set of system calls. The syntax of the syscall_set specification is the same as in the -e trace option. The default is verbose=all. -e raw=syscall_set -e x=syscall_set --raw=syscall_set Print raw, undecoded arguments for the specified set of system calls. The syntax of the syscall_set specification is the same as in the -e trace option. This option has the effect of causing all arguments to be printed in hexadecimal. This is mostly useful if you don't trust the decoding or you need to know the actual numeric value of an argument. See also -X raw option. -e read=set -e reads=set -e r=set --read=set Perform a full hexadecimal and ASCII dump of all the data read from file descriptors listed in the specified set. 
For example, to see all input activity on file descriptors 3 and 5 use -e read=3,5. Note that this is independent from the normal tracing of the read(2) system call which is controlled by the option -e trace=read. -e write=set -e writes=set -e w=set --write=set Perform a full hexadecimal and ASCII dump of all the data written to file descriptors listed in the specified set. For example, to see all output activity on file descriptors 3 and 5 use -e write=3,5. Note that this is independent from the normal tracing of the write(2) system call which is controlled by the option -e trace=write. -e quiet=set -e silent=set -e silence=set -e q=set --quiet=set --silent=set --silence=set Suppress various information messages. The default is quiet=none. set can include the following elements: attach Suppress messages about attaching and detaching ("[ Process NNNN attached ]", "[ Process NNNN detached ]"). exit Suppress messages about process exits ("+++ exited with SSS +++"). path-resolution Suppress messages about resolution of paths provided via the -P option ("Requested path "..." resolved into "...""). personality Suppress messages about process personality changes ("[ Process PID=NNNN runs in PPP mode. ]"). thread-execve superseded Suppress messages about process being superseded by execve(2) in another thread ("+++ superseded by execve in pid NNNN +++"). -e decode-fds=set --decode-fds=set Decode various information associated with file descriptors. The default is decode-fds=none. set can include the following elements: path Print file paths. Also enables printing of tracee's current working directory when AT_FDCWD constant is used. socket Print socket protocol-specific information, dev Print character/block device numbers. pidfd Print PIDs associated with pidfd file descriptors. signalfd Print signal masks associated with signalfd file descriptors. -e decode-pids=set --decode-pids=set Decode various information associated with process IDs (and also thread IDs, process group IDs, and session IDs). The default is decode-pids=none. set can include the following elements: comm Print command names associated with thread or process IDs. pidns Print thread, process, process group, and session IDs in strace's PID namespace if the tracee is in a different PID namespace. -e kvm=vcpu --kvm=vcpu Print the exit reason of kvm vcpu. Requires Linux kernel version 4.16.0 or higher. -i --instruction-pointer Print the instruction pointer at the time of the system call. -n --syscall-number Print the syscall number. -k --stack-traces Print the execution stack trace of the traced processes after each system call. -o filename --output=filename Write the trace output to the file filename rather than to stderr. filename.pid form is used if -ff option is supplied. If the argument begins with '|' or '!', the rest of the argument is treated as a command and all output is piped to it. This is convenient for piping the debugging output to a program without affecting the redirections of executed programs. The latter is not compatible with -ff option currently. -A --output-append-mode Open the file provided in the -o option in append mode. -q --quiet --quiet=attach,personality Suppress messages about attaching, detaching, and personality changes. This happens automatically when output is redirected to a file and the command is run directly instead of attaching. -qq --quiet=attach,personality,exit Suppress messages attaching, detaching, personality changes, and about process exit status. 
-qqq --quiet=all Suppress all suppressible messages (please refer to the -e quiet option description for the full list of suppressible messages). -r --relative-timestamps[=precision] Print a relative timestamp upon entry to each system call. This records the time difference between the beginning of successive system calls. precision can be one of s (for seconds), ms (milliseconds), us (microseconds), or ns (nanoseconds), and allows setting the precision of time value being printed. Default is us (microseconds). Note that since -r option uses the monotonic clock time for measuring time difference and not the wall clock time, its measurements can differ from the difference in time reported by the -t option. -s strsize --string-limit=strsize Specify the maximum string size to print (the default is 32). Note that filenames are not considered strings and are always printed in full. --absolute-timestamps[=[[format:]format],[[precision:]precision]] --timestamps[=[[format:]format],[[precision:]precision]] Prefix each line of the trace with the wall clock time in the specified format with the specified precision. format can be one of the following: none No time stamp is printed. Can be used to override the previous setting. time Wall clock time (strftime(3) format string is %T). unix Number of seconds since the epoch (strftime(3) format string is %s). precision can be one of s (for seconds), ms (milliseconds), us (microseconds), or ns (nanoseconds). Default arguments for the option are format:time,precision:s. -t --absolute-timestamps Prefix each line of the trace with the wall clock time. -tt --absolute-timestamps=precision:us If given twice, the time printed will include the microseconds. -ttt --absolute-timestamps=format:unix,precision:us If given thrice, the time printed will include the microseconds and the leading portion will be printed as the number of seconds since the epoch. -T --syscall-times[=precision] Show the time spent in system calls. This records the time difference between the beginning and the end of each system call. precision can be one of s (for seconds), ms (milliseconds), us (microseconds), or ns (nanoseconds), and allows setting the precision of time value being printed. Default is us (microseconds). -v --no-abbrev Print unabbreviated versions of environment, stat, termios, etc. calls. These structures are very common in calls and so the default behavior displays a reasonable subset of structure members. Use this option to get all of the gory details. --strings-in-hex[=option] Control usage of escape sequences with hexadecimal numbers in the printed strings. Normally (when no --strings-in-hex or -x option is supplied), escape sequences are used to print non-printable and non-ASCII characters (that is, characters with a character code less than 32 or greater than 127), or to disambiguate the output (so, for quotes and other characters that encase the printed string, for example, angle brackets, in case of file descriptor path output); for the former use case, unless it is a white space character that has a symbolic escape sequence defined in the C standard (that is, “\t” for a horizontal tab, “\n” for a newline, “\v” for a vertical tab, “\f” for a form feed page break, and “\r” for a carriage return) are printed using escape sequences with numbers that correspond to their byte values, with octal number format being the default. option can be one of the following: none Hexadecimal numbers are not used in the output at all. 
When there is a need to emit an escape sequence, octal numbers are used. non-ascii-chars Hexadecimal numbers are used instead of octal in the escape sequences. non-ascii Strings that contain non-ASCII characters are printed using escape sequences with hexadecimal numbers. all All strings are printed using escape sequences with hexadecimal numbers. When the option is supplied without an argument, all is assumed. -x --strings-in-hex=non-ascii Print all non-ASCII strings in hexadecimal string format. -xx --strings-in-hex[=all] Print all strings in hexadecimal string format. -X format --const-print-style=format Set the format for printing of named constants and flags. Supported format values are: raw Raw number output, without decoding. abbrev Output a named constant or a set of flags instead of the raw number if they are found. This is the default strace behaviour. verbose Output both the raw value and the decoded string (as a comment). -y --decode-fds --decode-fds=path Print paths associated with file descriptor arguments and with the AT_FDCWD constant. -yy --decode-fds=all Print all available information associated with file descriptors: protocol-specific information associated with socket file descriptors, block/character device number associated with device file descriptors, and PIDs associated with pidfd file descriptors. --pidns-translation --decode-pids=pidns If strace and tracee are in different PID namespaces, print PIDs in strace's namespace, too. -Y --decode-pids=comm Print command names for PIDs. --secontext[=format] -e secontext=format When SELinux is available and is not disabled, print in square brackets SELinux contexts of processes, files, and descriptors. The format argument is a comma-separated list of items being one of the following: full Print the full context (user, role, type level and category). mismatch Also print the context recorded by the SELinux database in case the current context differs. The latter is printed after two exclamation marks (!!). The default value for --secontext is !full,mismatch which prints only the type instead of full context and doesn't check for context mismatches. Statistics -c --summary-only Count time, calls, and errors for each system call and report a summary on program exit, suppressing the regular output. This attempts to show system time (CPU time spent running in the kernel) independent of wall clock time. If -c is used with -f, only aggregate totals for all traced processes are kept. -C --summary Like -c but also print regular output while processes are running. -O overhead --summary-syscall-overhead=overhead Set the overhead for tracing system calls to overhead. This is useful for overriding the default heuristic for guessing how much time is spent in mere measuring when timing system calls using the -c option. The accuracy of the heuristic can be gauged by timing a given program run without tracing (using time(1)) and comparing the accumulated system call time to the total produced using -c. The format of overhead specification is described in section Time specification format description. -S sortby --summary-sort-by=sortby Sort the output of the histogram printed by the -c option by the specified criterion. Legal values are time (or time-percent or time-total or total-time), min-time (or shortest or time-min), max-time (or longest or time-max), avg-time (or time-avg), calls (or count), errors (or error), name (or syscall or syscall-name), and nothing (or none); default is time. 
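For instance, one possible way to profile a command together with its children is strace -f -c -S calls PROG, which suppresses the regular per-call output, keeps aggregate totals across all traced processes, and sorts the final summary by call count.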
-U columns --summary-columns=columns Configure a set (and order) of columns being shown in the call summary. The columns argument is a comma-separated list with items being one of the following: time-percent (or time) Percentage of cumulative time consumed by a specific system call. total-time (or time-total) Total system (or wall clock, if -w option is provided) time consumed by a specific system call. min-time (or shortest or time-min) Minimum observed call duration. max-time (or longest or time-max) Maximum observed call duration. avg-time (or time-avg) Average call duration. calls (or count) Call count. errors (or error) Error count. name (or syscall or syscall-name) Syscall name. The default value is time-percent,total-time,avg-time,calls,errors,name. If the name field is not supplied explicitly, it is added as the last column. -w --summary-wall-clock Summarise the time difference between the beginning and end of each system call. The default is to summarise the system time. Tampering -e inject=syscall_set[:error=errno|:retval=value][:signal=sig] [:syscall=syscall][:delay_enter=delay][:delay_exit=delay] [:poke_enter=@argN=DATAN,@argM=DATAM...] [:poke_exit=@argN=DATAN,@argM=DATAM...][:when=expr] --inject=syscall_set[:error=errno|:retval=value][:signal=sig] [:syscall=syscall][:delay_enter=delay][:delay_exit=delay] [:poke_enter=@argN=DATAN,@argM=DATAM...] [:poke_exit=@argN=DATAN,@argM=DATAM...][:when=expr] Perform syscall tampering for the specified set of syscalls. The syntax of the syscall_set specification is the same as in the -e trace option. At least one of error, retval, signal, delay_enter, delay_exit, poke_enter, or poke_exit options has to be specified. error and retval are mutually exclusive. If :error=errno option is specified, a fault is injected into a syscall invocation: the syscall number is replaced by -1 which corresponds to an invalid syscall (unless a syscall is specified with :syscall= option), and the error code is specified using a symbolic errno value like ENOSYS or a numeric value within 1..4095 range. If :retval=value option is specified, success injection is performed: the syscall number is replaced by -1, but a bogus success value is returned to the callee. If :signal=sig option is specified with either a symbolic value like SIGSEGV or a numeric value within 1..SIGRTMAX range, that signal is delivered on entering every syscall specified by the set. If :delay_enter=delay or :delay_exit=delay options are specified, delay injection is performed: the tracee is delayed by time period specified by delay on entering or exiting the syscall, respectively. The format of delay specification is described in section Time specification format description. If :poke_enter=@argN=DATAN,@argM=DATAM... or :poke_exit=@argN=DATAN,@argM=DATAM... options are specified, tracee's memory at locations, pointed to by system call arguments argN and argM (going from arg1 to arg7) is overwritten by data DATAN and DATAM (specified in hexadecimal format; for example :poke_enter=@arg1=0000DEAD0000BEEF). :poke_enter modifies memory on syscall enter, and :poke_exit - on exit. If :signal=sig option is specified without :error=errno, :retval=value or :delay_{enter,exit}=usecs options, then only a signal sig is delivered without a syscall fault or delay injection. Conversely, :error=errno or :retval=value option without :delay_enter=delay, :delay_exit=delay or :signal=sig options injects a fault without delivering a signal or injecting a delay, etc. 
If :signal=sig option is specified together with :error=errno or :retval=value, then both injection of a fault or success and signal delivery are performed. if :syscall=syscall option is specified, the corresponding syscall with no side effects is injected instead of -1. Currently, only "pure" (see -e trace=%pure description) syscalls can be specified there. Unless a :when=expr subexpression is specified, an injection is being made into every invocation of each syscall from the set. The format of the subexpression is: first[..last][+[step]] Number first stands for the first invocation number in the range, number last stands for the last invocation number in the range, and step stands for the step between two consecutive invocations. The following combinations are useful: first For every syscall from the set, perform an injection for the syscall invocation number first only. first..last For every syscall from the set, perform an injection for the syscall invocation number first and all subsequent invocations until the invocation number last (inclusive). first+ For every syscall from the set, perform injections for the syscall invocation number first and all subsequent invocations. first..last+ For every syscall from the set, perform injections for the syscall invocation number first and all subsequent invocations until the invocation number last (inclusive). first+step For every syscall from the set, perform injections for syscall invocations number first, first+step, first+step+step, and so on. first..last+step Same as the previous, but consider only syscall invocations with numbers up to last (inclusive). For example, to fail each third and subsequent chdir syscalls with ENOENT, use -e inject=chdir:error=ENOENT:when=3+. The valid range for numbers first and step is 1..65535, and for number last is 1..65534. An injection expression can contain only one error= or retval= specification, and only one signal= specification. If an injection expression contains multiple when= specifications, the last one takes precedence. Accounting of syscalls that are subject to injection is done per syscall and per tracee. Specification of syscall injection can be combined with other syscall filtering options, for example, -P /dev/urandom -e inject=file:error=ENOENT. -e fault=syscall_set[:error=errno][:when=expr] --fault=syscall_set[:error=errno][:when=expr] Perform syscall fault injection for the specified set of syscalls. This is equivalent to more generic -e inject= expression with default value of errno option set to ENOSYS. Miscellaneous -d --debug Show some debugging output of strace itself on the standard error. -F This option is deprecated. It is retained for backward compatibility only and may be removed in future releases. Usage of multiple instances of -F option is still equivalent to a single -f, and it is ignored at all if used along with one or more instances of -f option. -h --help Print the help summary. --seccomp-bpf Try to enable use of seccomp-bpf (see seccomp(2)) to have ptrace(2)-stops only when system calls that are being traced occur in the traced processes. This option has no effect unless -f/--follow-forks is also specified. --seccomp-bpf is not compatible with --syscall-limit and -b/--detach-on options. It is also not applicable to processes attached using -p/--attach option. An attempt to enable system calls filtering using seccomp-bpf may fail for various reasons, e.g. there are too many system calls to filter, the seccomp API is not available, or strace itself is being traced. 
In cases when seccomp-bpf filter setup failed, strace proceeds as usual and stops traced processes on every system call. --tips[=[[id:]id],[[format:]format]] Show strace tips, tricks, and tweaks before exit. id can be a non-negative integer number, which enables printing of specific tip, trick, or tweak (these ID are not guaranteed to be stable), or random (the default), in which case a random tip is printed. format can be one of the following: none No tip is printed. Can be used to override the previous setting. compact Print the tip just big enough to contain all the text. full Print the tip in its full glory. Default is id:random,format:compact. -V --version Print the version number of strace. Multiple instances of the option beyond specific threshold tend to increase Strauss awareness. Time specification format description Time values can be specified as a decimal floating point number (in a format accepted by strtod(3)), optionally followed by one of the following suffices that specify the unit of time: s (seconds), ms (milliseconds), us (microseconds), or ns (nanoseconds). If no suffix is specified, the value is interpreted as microseconds. The described format is used for -O, -e inject=delay_enter, and -e inject=delay_exit options. | # strace
> Troubleshooting tool for tracing system calls. More information:
> https://manned.org/strace.
* Start tracing a specific process by its PID:
`strace -p {{pid}}`
* Trace a process and filter output by system call:
`strace -p {{pid}} -e {{system_call_name}}`
* Count time, calls, and errors for each system call and report a summary on program exit:
`strace -p {{pid}} -c`
* Show the time spent in every system call:
`strace -p {{pid}} -T`
* Start tracing a program by executing it:
`strace {{program}}`
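* Trace a program, following forked child processes and writing each process's trace to a separate file (using `--follow-forks` and `--output-separately` described above):
`strace -ff -o {{path/to/trace_file}} {{program}}`
* Show only system calls that returned with an error code:
`strace -Z {{program}}`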
* Start tracing file operations of a program:
`strace -e trace=%file {{program}}` |
cmp | Compare two files byte by byte. The optional SKIP1 and SKIP2 specify the number of bytes to skip at the beginning of each file (zero by default). Mandatory arguments to long options are mandatory for short options too. -b, --print-bytes print differing bytes -i, --ignore-initial=SKIP skip first SKIP bytes of both inputs -i, --ignore-initial=SKIP1:SKIP2 skip first SKIP1 bytes of FILE1 and first SKIP2 bytes of FILE2 -l, --verbose output byte numbers and differing byte values -n, --bytes=LIMIT compare at most LIMIT bytes -s, --quiet, --silent suppress all normal output --help display this help and exit -v, --version output version information and exit SKIP values may be followed by the following multiplicative suffixes: kB 1000, K 1024, MB 1,000,000, M 1,048,576, GB 1,000,000,000, G 1,073,741,824, and so on for T, P, E, Z, Y. If a FILE is '-' or missing, read standard input. Exit status is 0 if inputs are the same, 1 if different, 2 if trouble. | # cmp
> Compare two files byte by byte. More information:
> https://www.gnu.org/software/diffutils/manual/html_node/Invoking-cmp.html.
* Output char and line number of the first difference between two files:
`cmp {{path/to/file1}} {{path/to/file2}}`
* Output the byte and line number of the first difference, along with the differing byte values:
`cmp --print-bytes {{path/to/file1}} {{path/to/file2}}`
* Output the byte numbers and values of every difference:
`cmp --verbose {{path/to/file1}} {{path/to/file2}}`
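* Compare the files, skipping the first given number of bytes of each input:
`cmp --ignore-initial={{skip1}}:{{skip2}} {{path/to/file1}} {{path/to/file2}}`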
* Compare files but output nothing, yield only the exit status:
`cmp --quiet {{path/to/file1}} {{path/to/file2}}` |
chmod | The chmod utility shall change any or all of the file mode bits of the file named by each file operand in the way specified by the mode operand. It is implementation-defined whether and how the chmod utility affects any alternate or additional file access control mechanism (see the Base Definitions volume of POSIX.1‐2017, Section 4.5, File Access Permissions) being used for the specified file. Only a process whose effective user ID matches the user ID of the file, or a process with appropriate privileges, shall be permitted to change the file mode bits of a file. Upon successfully changing the file mode bits of a file, the chmod utility shall mark for update the last file status change timestamp of the file. The chmod utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines. The following option shall be supported: -R Recursively change file mode bits. For each file operand that names a directory, chmod shall change the file mode bits of the directory and all files in the file hierarchy below it. | # chmod
> Change the access permissions of a file or directory. More information:
> https://www.gnu.org/software/coreutils/chmod.
* Give the [u]ser who owns a file the right to e[x]ecute it:
`chmod u+x {{path/to/file}}`
* Give the [u]ser rights to [r]ead and [w]rite to a file/directory:
`chmod u+rw {{path/to/file_or_directory}}`
* Remove e[x]ecutable rights from the [g]roup:
`chmod g-x {{path/to/file}}`
* Give [a]ll users rights to [r]ead and e[x]ecute:
`chmod a+rx {{path/to/file}}`
* Give [o]thers (not in the file owner's group) the same rights as the [g]roup:
`chmod o=g {{path/to/file}}`
* Remove all rights from [o]thers:
`chmod o= {{path/to/file}}`
* Change permissions recursively giving [g]roup and [o]thers the ability to [w]rite:
`chmod -R g+w,o+w {{path/to/directory}}`
* Recursively give [a]ll users [r]ead permissions to files and e[X]ecute permissions to sub-directories within a directory:
`chmod -R a+rX {{path/to/directory}}` |
chsh | The chsh command changes the user login shell. This determines the name of the user's initial login command. A normal user may only change the login shell for her own account; the superuser may change the login shell for any account. The options which apply to the chsh command are: -h, --help Display help message and exit. -R, --root CHROOT_DIR Apply changes in the CHROOT_DIR directory and use the configuration files from the CHROOT_DIR directory. Only absolute paths are supported. -s, --shell SHELL The name of the user's new login shell. Setting this field to blank causes the system to select the default login shell. If the -s option is not selected, chsh operates in an interactive fashion, prompting the user with the current login shell. Enter the new value to change the shell, or leave the line blank to use the current one. The current shell is displayed between a pair of [ ] marks. | # chsh
> Change a user's login shell. More information: https://manned.org/chsh.
* Set a specific login shell for the current user interactively:
`chsh`
* Set a specific login [s]hell for the current user:
`chsh -s {{path/to/shell}}`
* Set a login [s]hell for a specific user:
`chsh -s {{path/to/shell}} {{username}}`
* [l]ist available shells:
`chsh -l` |
coredumpctl | coredumpctl is a tool that can be used to retrieve and process core dumps and metadata which were saved by systemd-coredump(8). The following options are understood: -h, --help Print a short help text and exit. --version Print a short version string and exit. --no-pager Do not pipe output into a pager. --no-legend Do not print the legend, i.e. column headers and the footer with hints. --json=MODE Shows output formatted as JSON. Expects one of "short" (for the shortest possible output without any redundant whitespace or line breaks), "pretty" (for a pretty version of the same, with indentation and line breaks) or "off" (to turn off JSON output, the default). -1 Show information of the most recent core dump only, instead of listing all known core dumps. Equivalent to --reverse -n 1. -n INT Show at most the specified number of entries. The specified parameter must be an integer greater or equal to 1. -S, --since Only print entries which are since the specified date. -U, --until Only print entries which are until the specified date. -r, --reverse Reverse output so that the newest entries are displayed first. -F FIELD, --field=FIELD Print all possible data values the specified field takes in matching core dump entries of the journal. -o FILE, --output=FILE Write the core to FILE. --debugger=DEBUGGER Use the given debugger for the debug command. If not given and $SYSTEMD_DEBUGGER is unset, then gdb(1) will be used. -A ARGS, --debugger-arguments=ARGS Pass the given ARGS as extra command line arguments to the debugger. Quote as appropriate when ARGS contain whitespace. (See Examples.) --file=GLOB Takes a file glob as an argument. If specified, coredumpctl will operate on the specified journal files matching GLOB instead of the default runtime and system journal paths. May be specified multiple times, in which case files will be suitably interleaved. -D DIR, --directory=DIR Use the journal files in the specified DIR. --root=ROOT Use root directory ROOT when searching for coredumps. --image=image Takes a path to a disk image file or block device node. If specified, all operations are applied to file system in the indicated disk image. This option is similar to --root=, but operates on file systems stored in disk images or block devices. The disk image should either contain just a file system or a set of file systems within a GPT partition table, following the Discoverable Partitions Specification[1]. For further information on supported disk images, see systemd-nspawn(1)'s switch of the same name. --image-policy=policy Takes an image policy string as argument, as per systemd.image-policy(7). The policy is enforced when operating on the disk image specified via --image=, see above. If not specified defaults to the "*" policy, i.e. all recognized file systems in the image are used. -q, --quiet Suppresses informational messages about lack of access to journal files and possible in-flight coredumps. --all Look at all available journal files in /var/log/journal/ (excluding journal namespaces) instead of only local ones. | # coredumpctl
> Retrieve and process saved core dumps and metadata. More information:
> https://www.freedesktop.org/software/systemd/man/coredumpctl.html.
* List all captured core dumps:
`coredumpctl list`
* List captured core dumps for a program:
`coredumpctl list {{program}}`
* Show information about the core dump of the process with a specific `PID`:
`coredumpctl info {{PID}}`
* Invoke the debugger using the last core dump of a program:
`coredumpctl debug {{program}}`
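* Invoke a specific debugger using the last core dump of a program:
`coredumpctl --debugger={{path/to/debugger}} debug {{program}}`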
* Extract the last core dump of a program to a file:
`coredumpctl --output={{path/to/file}} dump {{program}}` |
git-check-mailmap | For each “Name <user@host>” or “<user@host>” from the command-line or standard input (when using --stdin), look up the person’s canonical name and email address (see "Mapping Authors" below). If found, print them; otherwise print the input as-is. --stdin Read contacts, one per line, from the standard input after exhausting contacts provided on the command-line. | # git check-mailmap
> Show canonical names and email addresses of contacts. More information:
> https://git-scm.com/docs/git-check-mailmap.
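* Look up canonical names and email addresses for contacts listed in a file (one per line), read from `stdin`:
`git check-mailmap --stdin < {{path/to/contacts_file}}`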
* Look up the canonical name associated with an email address:
`git check-mailmap "<{{email@example.com}}>"` |
top | The top program provides a dynamic real-time view of a running system. It can display system summary information as well as a list of processes or threads currently being managed by the Linux kernel. The types of system summary information shown and the types, order and size of information displayed for processes are all user configurable and that configuration can be made persistent across restarts. The program provides a limited interactive interface for process manipulation as well as a much more extensive interface for personal configuration -- encompassing every aspect of its operation. And while top is referred to throughout this document, you are free to name the program anything you wish. That new name, possibly an alias, will then be reflected on top's display and used when reading and writing a configuration file. | # top
> Display dynamic real-time information about running processes. More
> information: https://ss64.com/osx/top.html.
* Start `top`; all options are available from within the interactive interface:
`top`
* Start `top` sorting processes by internal memory size (the default order is by process ID):
`top -o mem`
* Start `top` sorting processes first by CPU, then by running time:
`top -o cpu -O time`
* Start `top` displaying only processes owned by a given user:
`top -user {{user_name}}`
* Get help about interactive commands:
`?` |
unshare | The unshare command creates new namespaces (as specified by the command-line options described below) and then executes the specified program. If program is not given, then "${SHELL}" is run (default: /bin/sh). By default, a new namespace persists only as long as it has member processes. A new namespace can be made persistent even when it has no member processes by bind mounting /proc/pid/ns/type files to a filesystem path. A namespace that has been made persistent in this way can subsequently be entered with nsenter(1) even after the program terminates (except PID namespaces where a permanently running init process is required). Once a persistent namespace is no longer needed, it can be unpersisted by using umount(8) to remove the bind mount. See the EXAMPLES section for more details. unshare since util-linux version 2.36 uses /proc/[pid]/ns/pid_for_children and /proc/[pid]/ns/time_for_children files for persistent PID and TIME namespaces. This change requires Linux kernel 4.17 or newer. The following types of namespaces can be created with unshare: mount namespace Mounting and unmounting filesystems will not affect the rest of the system, except for filesystems which are explicitly marked as shared (with mount --make-shared; see /proc/self/mountinfo or findmnt -o+PROPAGATION for the shared flags). For further details, see mount_namespaces(7). unshare since util-linux version 2.27 automatically sets propagation to private in a new mount namespace to make sure that the new namespace is really unshared. It’s possible to disable this feature with option --propagation unchanged. Note that private is the kernel default. UTS namespace Setting hostname or domainname will not affect the rest of the system. For further details, see uts_namespaces(7). IPC namespace The process will have an independent namespace for POSIX message queues as well as System V message queues, semaphore sets and shared memory segments. For further details, see ipc_namespaces(7). network namespace The process will have independent IPv4 and IPv6 stacks, IP routing tables, firewall rules, the /proc/net and /sys/class/net directory trees, sockets, etc. For further details, see network_namespaces(7). PID namespace Children will have a distinct set of PID-to-process mappings from their parent. For further details, see pid_namespaces(7). cgroup namespace The process will have a virtualized view of /proc/self/cgroup, and new cgroup mounts will be rooted at the namespace cgroup root. For further details, see cgroup_namespaces(7). user namespace The process will have a distinct set of UIDs, GIDs and capabilities. For further details, see user_namespaces(7). time namespace The process can have a distinct view of CLOCK_MONOTONIC and/or CLOCK_BOOTTIME which can be changed using /proc/self/timens_offsets. For further details, see time_namespaces(7). -i, --ipc[=file] Create a new IPC namespace. If file is specified, then the namespace is made persistent by creating a bind mount at file. -m, --mount[=file] Create a new mount namespace. If file is specified, then the namespace is made persistent by creating a bind mount at file. Note that file must be located on a mount whose propagation type is not shared (or an error results). Use the command findmnt -o+PROPAGATION when not sure about the current setting. See also the examples below. -n, --net[=file] Create a new network namespace. If file is specified, then the namespace is made persistent by creating a bind mount at file. -p, --pid[=file] Create a new PID namespace. 
If file is specified, then the namespace is made persistent by creating a bind mount at file. (Creation of a persistent PID namespace will fail if the --fork option is not also specified.) See also the --fork and --mount-proc options. -u, --uts[=file] Create a new UTS namespace. If file is specified, then the namespace is made persistent by creating a bind mount at file. -U, --user[=file] Create a new user namespace. If file is specified, then the namespace is made persistent by creating a bind mount at file. -C, --cgroup[=file] Create a new cgroup namespace. If file is specified, then the namespace is made persistent by creating a bind mount at file. -T, --time[=file] Create a new time namespace. If file is specified, then the namespace is made persistent by creating a bind mount at file. The --monotonic and --boottime options can be used to specify the corresponding offset in the time namespace. -f, --fork Fork the specified program as a child process of unshare rather than running it directly. This is useful when creating a new PID namespace. Note that when unshare is waiting for the child process, then it ignores SIGINT and SIGTERM and does not forward any signals to the child. It is necessary to send signals to the child process. --keep-caps When the --user option is given, ensure that capabilities granted in the user namespace are preserved in the child process. --kill-child[=signame] When unshare terminates, have signame be sent to the forked child process. Combined with --pid this allows for an easy and reliable killing of the entire process tree below unshare. If not given, signame defaults to SIGKILL. This option implies --fork. --mount-proc[=mountpoint] Just before running the program, mount the proc filesystem at mountpoint (default is /proc). This is useful when creating a new PID namespace. It also implies creating a new mount namespace since the /proc mount would otherwise mess up existing programs on the system. The new proc filesystem is explicitly mounted as private (with MS_PRIVATE|MS_REC). --map-user=uid|name Run the program only after the current effective user ID has been mapped to uid. If this option is specified multiple times, the last occurrence takes precedence. This option implies --user. --map-users=inneruid:outeruid:count|auto Run the program only after the block of user IDs of size count beginning at outeruid has been mapped to the block of user IDs beginning at inneruid. This mapping is created with newuidmap(1). If the range of user IDs overlaps with the mapping specified by --map-user, then a "hole" will be removed from the mapping. This may result in the highest user ID of the mapping not being mapped. The special value auto will map the first block of user IDs owned by the effective user from /etc/subuid to a block starting at user ID 0. If this option is specified multiple times, the last occurrence takes precedence. This option implies --user. Before util-linux version 2.39, this option expected a comma-separated argument of the form outeruid,inneruid,count but that format is now deprecated for consistency with the ordering used in /proc/[pid]/uid_map and the X-mount.idmap mount option. --map-group=gid|name Run the program only after the current effective group ID has been mapped to gid. If this option is specified multiple times, the last occurrence takes precedence. This option implies --setgroups=deny and --user. 
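For example, one common combination of the options described above is unshare --fork --pid --mount-proc PROG, which runs PROG as PID 1 of a new PID namespace with a freshly mounted /proc, so that process listings inside the namespace show only PROG and its descendants.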
--map-groups=innergid:outergid:count|auto Run the program only after the block of group IDs of size count beginning at outergid has been mapped to the block of group IDs beginning at innergid. This mapping is created with newgidmap(1). If the range of group IDs overlaps with the mapping specified by --map-group, then a "hole" will be removed from the mapping. This may result in the highest group ID of the mapping not being mapped. The special value auto will map the first block of user IDs owned by the effective user from /etc/subgid to a block starting at group ID 0. If this option is specified multiple times, the last occurrence takes precedence. This option implies --user. Before util-linux version 2.39, this option expected a comma-separated argument of the form outergid,innergid,count but that format is now deprecated for consistency with the ordering used in /proc/[pid]/gid_map and the X-mount.idmap mount option. --map-auto Map the first block of user IDs owned by the effective user from /etc/subuid to a block starting at user ID 0. In the same manner, also map the first block of group IDs owned by the effective group from /etc/subgid to a block starting at group ID 0. This option is intended to handle the common case where the first block of subordinate user and group IDs can map the whole user and group ID space. This option is equivalent to specifying --map-users=auto and --map-groups=auto. -r, --map-root-user Run the program only after the current effective user and group IDs have been mapped to the superuser UID and GID in the newly created user namespace. This makes it possible to conveniently gain capabilities needed to manage various aspects of the newly created namespaces (such as configuring interfaces in the network namespace or mounting filesystems in the mount namespace) even when run unprivileged. As a mere convenience feature, it does not support more sophisticated use cases, such as mapping multiple ranges of UIDs and GIDs. This option implies --setgroups=deny and --user. This option is equivalent to --map-user=0 --map-group=0. -c, --map-current-user Run the program only after the current effective user and group IDs have been mapped to the same UID and GID in the newly created user namespace. This option implies --setgroups=deny and --user. This option is equivalent to --map-user=$(id -ru) --map-group=$(id -rg). --propagation private|shared|slave|unchanged Recursively set the mount propagation flag in the new mount namespace. The default is to set the propagation to private. It is possible to disable this feature with the argument unchanged. The option is silently ignored when the mount namespace (--mount) is not requested. --setgroups allow|deny Allow or deny the setgroups(2) system call in a user namespace. To be able to call setgroups(2), the calling process must at least have CAP_SETGID. But since Linux 3.19 a further restriction applies: the kernel gives permission to call setgroups(2) only after the GID map (/proc/pid*/gid_map*) has been set. The GID map is writable by root when setgroups(2) is enabled (i.e., allow, the default), and the GID map becomes writable by unprivileged processes when setgroups(2) is permanently disabled (with deny). -R, --root=dir run the command with root directory set to dir. -w, --wd=dir change working directory to dir. -S, --setuid uid Set the user ID which will be used in the entered namespace. -G, --setgid gid Set the group ID which will be used in the entered namespace and drop supplementary groups. 
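For example, one possible unprivileged invocation is unshare --user --map-root-user --mount --fork --pid --mount-proc PROG, which starts PROG with an apparent UID and GID of 0 inside new user, mount, and PID namespaces (provided the kernel permits unprivileged user namespaces).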
--monotonic offset Set the offset of CLOCK_MONOTONIC which will be used in the entered time namespace. This option requires unsharing a time namespace with --time. --boottime offset Set the offset of CLOCK_BOOTTIME which will be used in the entered time namespace. This option requires unsharing a time namespace with --time. -h, --help Display help text and exit. -V, --version Print version and exit. | # unshare
> Execute a command in new user-defined namespaces. More information:
> https://www.kernel.org/doc/html/latest/userspace-api/unshare.html.
* Execute a command without sharing access to connected networks:
`unshare --net {{command}} {{command_arguments}}`
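* Execute a command in a new user namespace with the current user mapped to the superuser inside it:
`unshare --user --map-root-user {{command}} {{command_arguments}}`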
* Execute a command as a child process without sharing mounts, processes, or networks:
`unshare --mount --pid --net --fork {{command}} {{command_arguments}}` |
git-switch | Switch to a specified branch. The working tree and the index are updated to match the branch. All new commits will be added to the tip of this branch. Optionally a new branch could be created with either -c, -C, automatically from a remote branch of same name (see --guess), or detach the working tree from any branch with --detach, along with switching. Switching branches does not require a clean index and working tree (i.e. no differences compared to HEAD). The operation is aborted however if the operation leads to loss of local changes, unless told otherwise with --discard-changes or --merge. THIS COMMAND IS EXPERIMENTAL. THE BEHAVIOR MAY CHANGE. <branch> Branch to switch to. <new-branch> Name for the new branch. <start-point> The starting point for the new branch. Specifying a <start-point> allows you to create a branch based on some other point in history than where HEAD currently points. (Or, in the case of --detach, allows you to inspect and detach from some other point.) You can use the @{-N} syntax to refer to the N-th last branch/commit switched to using "git switch" or "git checkout" operation. You may also specify - which is synonymous to @{-1}. This is often used to switch quickly between two branches, or to undo a branch switch by mistake. As a special case, you may use A...B as a shortcut for the merge base of A and B if there is exactly one merge base. You can leave out at most one of A and B, in which case it defaults to HEAD. -c <new-branch>, --create <new-branch> Create a new branch named <new-branch> starting at <start-point> before switching to the branch. This is a convenient shortcut for: $ git branch <new-branch> $ git switch <new-branch> -C <new-branch>, --force-create <new-branch> Similar to --create except that if <new-branch> already exists, it will be reset to <start-point>. This is a convenient shortcut for: $ git branch -f <new-branch> $ git switch <new-branch> -d, --detach Switch to a commit for inspection and discardable experiments. See the "DETACHED HEAD" section in git-checkout(1) for details. --guess, --no-guess If <branch> is not found but there does exist a tracking branch in exactly one remote (call it <remote>) with a matching name, treat as equivalent to $ git switch -c <branch> --track <remote>/<branch> If the branch exists in multiple remotes and one of them is named by the checkout.defaultRemote configuration variable, we’ll use that one for the purposes of disambiguation, even if the <branch> isn’t unique across all remotes. Set it to e.g. checkout.defaultRemote=origin to always checkout remote branches from there if <branch> is ambiguous but exists on the origin remote. See also checkout.defaultRemote in git-config(1). --guess is the default behavior. Use --no-guess to disable it. The default behavior can be set via the checkout.guess configuration variable. -f, --force An alias for --discard-changes. --discard-changes Proceed even if the index or the working tree differs from HEAD. Both the index and working tree are restored to match the switching target. If --recurse-submodules is specified, submodule content is also restored to match the switching target. This is used to throw away local changes. -m, --merge If you have local modifications to one or more files that are different between the current branch and the branch to which you are switching, the command refuses to switch branches in order to preserve your modifications in context. 
However, with this option, a three-way merge between the current branch, your working tree contents, and the new branch is done, and you will be on the new branch. When a merge conflict happens, the index entries for conflicting paths are left unmerged, and you need to resolve the conflicts and mark the resolved paths with git add (or git rm if the merge should result in deletion of the path). --conflict=<style> The same as --merge option above, but changes the way the conflicting hunks are presented, overriding the merge.conflictStyle configuration variable. Possible values are "merge" (default), "diff3", and "zdiff3". -q, --quiet Quiet, suppress feedback messages. --progress, --no-progress Progress status is reported on the standard error stream by default when it is attached to a terminal, unless --quiet is specified. This flag enables progress reporting even if not attached to a terminal, regardless of --quiet. -t, --track [direct|inherit] When creating a new branch, set up "upstream" configuration. -c is implied. See --track in git-branch(1) for details. If no -c option is given, the name of the new branch will be derived from the remote-tracking branch, by looking at the local part of the refspec configured for the corresponding remote, and then stripping the initial part up to the "*". This would tell us to use hack as the local branch when branching off of origin/hack (or remotes/origin/hack, or even refs/remotes/origin/hack). If the given name has no slash, or the above guessing results in an empty name, the guessing is aborted. You can explicitly give a name with -c in such a case. --no-track Do not set up "upstream" configuration, even if the branch.autoSetupMerge configuration variable is true. --orphan <new-branch> Create a new orphan branch, named <new-branch>. All tracked files are removed. --ignore-other-worktrees git switch refuses when the wanted ref is already checked out by another worktree. This option makes it check the ref out anyway. In other words, the ref can be held by more than one worktree. --recurse-submodules, --no-recurse-submodules Using --recurse-submodules will update the content of all active submodules according to the commit recorded in the superproject. If nothing (or --no-recurse-submodules) is used, submodules working trees will not be updated. Just like git-submodule(1), this will detach HEAD of the submodules. | # git switch
> Switch between Git branches. Requires Git version 2.23+. See also `git
> checkout`. More information: https://git-scm.com/docs/git-switch.
* Switch to an existing branch:
`git switch {{branch_name}}`
* Create a new branch and switch to it:
`git switch --create {{branch_name}}`
* Create a new branch based on an existing commit and switch to it:
`git switch --create {{branch_name}} {{commit}}`
* Switch to the previous branch:
`git switch -`
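* Switch to a commit for inspection and discardable experiments (detached HEAD):
`git switch --detach {{commit}}`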
* Switch to a branch and update all submodules to match:
`git switch --recurse-submodules {{branch_name}}`
* Switch to a branch, using a three-way merge to carry over any local modifications:
`git switch --merge {{branch_name}}` |
dpkg | dpkg is a medium-level tool to install, build, remove and manage Debian packages. The primary and more user-friendly front-end for dpkg as a CLI (command-line interface) is apt(8) and as a TUI (terminal user interface) is aptitude(8). dpkg itself is controlled entirely via command line parameters, which consist of exactly one action and zero or more options. The action-parameter tells dpkg what to do and options control the behavior of the action in some way. dpkg can also be used as a front-end to dpkg-deb(1) and dpkg-query(1). The list of supported actions can be found later on in the ACTIONS section. If any such action is encountered dpkg just runs dpkg-deb or dpkg-query with the parameters given to it, but no specific options are currently passed to them, to use any such option the back-ends need to be called directly. All options can be specified both on the command line and in the dpkg configuration file /usr/local/etc/dpkg/dpkg.cfg or fragment files (with names matching this shell pattern '[0-9a-zA-Z_-]*') on the configuration directory /usr/local/etc/dpkg/dpkg.cfg.d/. Each line in the configuration file is either an option (exactly the same as the command line option but without leading hyphens) or a comment (if it starts with a ‘#’). --abort-after=number Change after how many errors dpkg will abort. The default is 50. -B, --auto-deconfigure When a package is removed, there is a possibility that another installed package depended on the removed package. Specifying this option will cause automatic deconfiguration of the package which depended on the removed package. -Doctal, --debug=octal Switch debugging on. octal is formed by bitwise-ORing desired values together from the list below (note that these values may change in future releases). -Dh or --debug=help display these debugging values. Number Description 1 Generally helpful progress information 2 Invocation and status of maintainer scripts 10 Output for each file processed 100 Lots of output for each file processed 20 Output for each configuration file 200 Lots of output for each configuration file 40 Dependencies and conflicts 400 Lots of dependencies/conflicts output 10000 Trigger activation and processing 20000 Lots of output regarding triggers 40000 Silly amounts of output regarding triggers 1000 Lots of drivel about for example the dpkg/info dir 2000 Insane amounts of drivel --force-things --no-force-things, --refuse-things Force or refuse (no-force and refuse mean the same thing) to do some things. things is a comma separated list of things specified below. --force-help displays a message describing them. Things marked with (*) are forced by default. Warning: These options are mostly intended to be used by experts only. Using them without fully understanding their effects may break your whole system. all: Turns on (or off) all force options. downgrade(*): Install a package, even if newer version of it is already installed. Warning: At present dpkg does not do any dependency checking on downgrades and therefore will not warn you if the downgrade breaks the dependency of some other package. This can have serious side effects, downgrading essential system components can even make your whole system unusable. Use with care. configure-any: Configure also any unpacked but unconfigured packages on which the current package depends. hold: Allow automatic installs, upgrades or removals of packages even when marked to be on “hold”. 
Note: When these actions are requested explicitly, the “hold” package selection state always gets ignored. remove-reinstreq: Remove a package, even if it's broken and marked to require reinstallation. This may, for example, cause parts of the package to remain on the system, which will then be forgotten by dpkg. remove-protected: Remove, even if the package is considered protected (since dpkg 1.20.1). Protected packages contain mostly important system boot infrastructure or are used for custom system- local meta-packages. Removing them might cause the whole system to be unable to boot or lose required functionality to operate, so use with caution. remove-essential: Remove, even if the package is considered essential. Essential packages contain mostly very basic Unix commands, required for the packaging system, for the operation of the system in general or during boot (although the latter should be converted to protected packages instead). Removing them might cause the whole system to stop working, so use with caution. depends: Turn all dependency problems into warnings. This affects the Pre-Depends and Depends fields. depends-version: Don't care about versions when checking dependencies. This affects the Pre-Depends and Depends fields. breaks: Install, even if this would break another package (since dpkg 1.14.6). This affects the Breaks field. conflicts: Install, even if it conflicts with another package. This is dangerous, for it will usually cause overwriting of some files. This affects the Conflicts field. confmiss: Always install the missing conffile without prompting. This is dangerous, since it means not preserving a change (removing) made to the file. confnew: If a conffile has been modified and the version in the package did change, always install the new version without prompting, unless the --force-confdef is also specified, in which case the default action is preferred. confold: If a conffile has been modified and the version in the package did change, always keep the old version without prompting, unless the --force-confdef is also specified, in which case the default action is preferred. confdef: If a conffile has been modified and the version in the package did change, always choose the default action without prompting. If there is no default action it will stop to ask the user unless --force-confnew or --force-confold is also been given, in which case it will use that to decide the final action. confask: If a conffile has been modified always offer to replace it with the version in the package, even if the version in the package did not change (since dpkg 1.15.8). If any of --force-confnew, --force-confold, or --force-confdef is also given, it will be used to decide the final action. overwrite: Overwrite one package's file with another's file. overwrite-dir: Overwrite one package's directory with another's file. overwrite-diverted: Overwrite a diverted file with an undiverted version. statoverride-add: Overwrite an existing stat override when adding it (since dpkg 1.19.5). statoverride-remove: Ignore a missing stat override when removing it (since dpkg 1.19.5). security-mac(*): Use platform-specific Mandatory Access Controls (MAC) based security when installing files into the filesystem (since dpkg 1.19.5). On Linux systems the implementation uses SELinux. unsafe-io: Do not perform safe I/O operations when unpacking (since dpkg 1.15.8.6). 
Currently this implies not performing file system syncs before file renames, which is known to cause substantial performance degradation on some file systems, unfortunately the ones that require the safe I/O in the first place due to their unreliable behaviour causing zero-length files on abrupt system crashes. Note: For ext4, the main offender, consider using instead the mount option nodelalloc, which will fix both the performance degradation and the data safety issues, the latter by making the file system not produce zero-length files on abrupt system crashes with any software not doing syncs before atomic renames. Warning: Using this option might improve performance at the cost of losing data, use with care. script-chrootless: Run maintainer scripts without chroot(2)ing into instdir even if the package does not support this mode of operation (since dpkg 1.18.5). Warning: This can destroy your host system, use with extreme care. architecture: Process even packages with wrong or no architecture. bad-version: Process even packages with wrong versions (since dpkg 1.16.1). bad-path: PATH is missing important programs, so problems are likely. not-root: Try to (de)install things even when not root. bad-verify: Install a package even if it fails authenticity check. --ignore-depends=package,... Ignore dependency-checking for specified packages (actually, checking is performed, but only warnings about conflicts are given, nothing else). This affects the Pre-Depends, Depends and Breaks fields. --no-act, --dry-run, --simulate Do everything which is supposed to be done, but don't write any changes. This is used to see what would happen with the specified action, without actually modifying anything. Be sure to give --no-act before the action-parameter, or you might end up with undesirable results. (e.g. dpkg --purge foo --no-act will first purge package “foo” and then try to purge package “--no-act”, even though you probably expected it to actually do nothing). -R, --recursive Recursively handle all regular files matching pattern *.deb found at specified directories and all of their subdirectories. This can be used with -i, -A, --install, --unpack and --record-avail actions. -G Don't install a package if a newer version of the same package is already installed. This is an alias of --refuse-downgrade. --admindir=dir Set the administrative directory to directory. This directory contains many files that give information about status of installed or uninstalled packages, etc. Defaults to «/usr/local/var/lib/dpkg» if DPKG_ADMINDIR has not been set. --instdir=dir Set the installation directory, which refers to the directory where packages are to be installed. instdir is also the directory passed to chroot(2) before running package's installation scripts, which means that the scripts see instdir as a root directory. Defaults to «/». --root=dir Set the root directory to directory, which sets the installation directory to «dir» and the administrative directory to «dir/usr/local/var/lib/dpkg». -O, --selected-only Only process the packages that are selected for installation. The actual marking is done with dselect or by dpkg, when it handles packages. For example, when a package is removed, it will be marked selected for deinstallation. -E, --skip-same-version Don't install the package if the same version and architecture of the package is already installed. 
Since dpkg 1.21.10, the architecture is also taken into account, which makes it possible to cross-grade packages or install additional co-installable instances with the same version, but different architecture. --pre-invoke=command --post-invoke=command Set an invoke hook command to be run via “sh -c” before or after the dpkg run for the unpack, configure, install, triggers-only, remove, purge, add-architecture and remove-architecture dpkg actions (since dpkg 1.15.4; add-architecture and remove-architecture actions since dpkg 1.17.19). This option can be specified multiple times. The order the options are specified is preserved, with the ones from the configuration files taking precedence. The environment variable DPKG_HOOK_ACTION is set for the hooks to the current dpkg action. Note: Front-ends might call dpkg several times per invocation, which might run the hooks more times than expected. --path-exclude=glob-pattern --path-include=glob-pattern Set glob-pattern as a path filter, either by excluding or re-including previously excluded paths matching the specified patterns during install (since dpkg 1.15.8). Warning: Take into account that depending on the excluded paths you might completely break your system, use with caution. The glob patterns use the same wildcards used in the shell, where ‘*’ matches any sequence of characters, including the empty string and also ‘/’. For example, «/usr/*/READ*» matches «/usr/share/doc/package/README». As usual, ‘?’ matches any single character (again, including ‘/’). And ‘[’ starts a character class, which can contain a list of characters, ranges and complementations. See glob(7) for detailed information about globbing. Note: The current implementation might re-include more directories and symlinks than needed, in particular when there is a more specific re-inclusion, to be on the safe side and avoid possible unpack failures; future work might fix this. This can be used to remove all paths except some particular ones; a typical case is: --path-exclude=/usr/share/doc/* --path-include=/usr/share/doc/*/copyright to remove all documentation files except the copyright files. These two options can be specified multiple times, and interleaved with each other. Both are processed in the given order, with the last rule that matches a file name making the decision. The filters are applied when unpacking the binary packages, and as such only have knowledge of the type of object currently being filtered (e.g. a normal file or a directory) and have no visibility of what objects will come next. Because these filters have side effects (in contrast to find(1) filters), excluding an exact pathname that happens to be a directory object like /usr/share/doc will not have the desired result, and only that pathname will be excluded (which could be automatically reincluded if the code sees the need). Any subsequent files contained within that directory will fail to unpack. Hint: make sure the globs are not expanded by your shell. --verify-format format-name Sets the output format for the --verify command (since dpkg 1.17.2). The only currently supported output format is rpm, which consists of a line for every path that failed any check. These lines have the following format: missing [c] pathname [(error-message)] ??5?????? 
[c] pathname The first 9 characters are used to report the checks result, either a literal missing when the file is not present or its metadata cannot be fetched, or one of the following special characters that report the result for each check: ‘?’ Implies the check could not be done (lack of support, file permissions, etc). ‘.’ Implies the check passed. ‘A-Za-z0-9’ Implies a specific check failed. The following positions and alphanumeric characters are currently supported: 1 ‘?’ These checks are currently not supported, will always be ‘?’. 2 ‘M’ The file mode check failed (since dpkg 1.21.0). Because pathname metadata is currently not tracked, this check can only be partially emulated via a very simple heuristic for pathnames that have a known digest, which implies they should be regular files, where the check will fail if the pathname is not a regular file on the filesystem. This check will currently never succeed as it does not have enough information available. 3 ‘5’ The digest check failed, which means the file contents have changed. 4-9 ‘?’ These checks are currently not supported, will always be ‘?’. The line is followed by a space and an attribute character. The following attribute character is supported: ‘c’ The pathname is a conffile. Finally followed by another space and the pathname. In case the entry was of the missing type, and the file was not actually present on the filesystem, then the line is followed by a space and the error message enclosed within parentheses. --status-fd n Send machine-readable package status and progress information to file descriptor n. This option can be specified multiple times. The information is generally one record per line, in one of the following forms: status: package: status Package status changed; status is as in the status file. status: package : error : extended-error-message An error occurred. Any possible newlines in extended-error-message will be converted to spaces before output. status: file : conffile-prompt : 'real-old' 'real-new' useredited distedited User is being asked a conffile question. processing: stage: package Sent just before a processing stage starts. stage is one of upgrade, install (both sent before unpacking), configure, trigproc, disappear, remove, purge. --status-logger=command Send machine-readable package status and progress information to the shell command's standard input, to be run via “sh -c” (since dpkg 1.16.0). This option can be specified multiple times. The output format used is the same as in --status-fd. --log=filename Log status change updates and actions to filename, instead of the default /usr/local/var/log/dpkg.log. If this option is given multiple times, the last filename is used. Log messages are of the form: YYYY-MM-DD HH:MM:SS startup type command For each dpkg invocation where type is archives (with a command of unpack or install) or packages (with a command of configure, triggers-only, remove or purge). YYYY-MM-DD HH:MM:SS status state pkg installed-version For status change updates. YYYY-MM-DD HH:MM:SS action pkg installed-version available-version For actions where action is one of install, upgrade, configure, trigproc, disappear, remove or purge. YYYY-MM-DD HH:MM:SS conffile filename decision For conffile changes where decision is either install or keep. --robot Use a machine-readable output format. This provides an interface for programs that need to parse the output of some of the commands that do not otherwise emit a machine-readable output format. 
No localization will be used, and the output will be modified to make it easier to parse. The only currently supported command is --version. --no-pager Disables the use of any pager when showing information (since dpkg 1.19.2). --no-debsig Do not try to verify package signatures. --no-triggers Do not run any triggers in this run (since dpkg 1.14.17), but activations will still be recorded. If used with --configure package or --triggers-only package then the named package postinst will still be run even if only a triggers run is needed. Use of this option may leave packages in the improper triggers-awaited and triggers-pending states. This can be fixed later by running: dpkg --configure --pending. --triggers Cancels a previous --no-triggers (since dpkg 1.14.17). | # dpkg
> Debian package manager. Some subcommands such as `dpkg-deb` have their own
> usage documentation. For equivalent commands in other package managers, see
> https://wiki.archlinux.org/title/Pacman/Rosetta. More information:
> https://manpages.debian.org/latest/dpkg/dpkg.html.
* Install a package:
`dpkg -i {{path/to/file.deb}}`
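* Install all `.deb` files found in a directory and its subdirectories (a sketch based on the `-R`/`--recursive` option described above):

`dpkg -i -R {{path/to/directory}}`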
* Remove a package:
`dpkg -r {{package}}`
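* Simulate a removal without changing anything on the system (a sketch of the `--no-act` option described above; note that it must precede the action):

`dpkg --no-act -r {{package}}`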
* List installed packages:
`dpkg -l {{pattern}}`
* List a package's contents:
`dpkg -L {{package}}`
* List contents of a local package file:
`dpkg -c {{path/to/file.deb}}`
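* Install a package while keeping existing modified conffiles without prompting (a sketch of the `--force-confold` flag described above; force options can break the system, use with care):

`dpkg --force-confold -i {{path/to/file.deb}}`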
* Find out which package owns a file:
`dpkg -S {{path/to/file}}` |
m4 | The m4 utility is a macro processor that shall read one or more text files, process them according to their included macro statements, and write the results to standard output. The m4 utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines, except that the order of the -D and -U options shall be significant, and options can be interspersed with operands. The following options shall be supported: -s Enable line synchronization output for the c99 preprocessor phase (that is, #line directives). -D name[=val] Define name to val or to null if =val is omitted. -U name Undefine name. | # m4
> Macro processor. More information: https://www.gnu.org/software/m4.
* Process macros in a file:
`m4 {{path/to/file}}`
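* Emit `#line` synchronization directives for the C preprocessor while processing (a sketch of the `-s` option described above):

`m4 -s {{path/to/file}}`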
* Define a macro before processing files:
`m4 -D{{macro_name}}={{macro_value}} {{path/to/file}}` |
git-check-ref-format | Checks if a given refname is acceptable, and exits with a non-zero status if it is not. A reference is used in Git to specify branches and tags. A branch head is stored in the refs/heads hierarchy, while a tag is stored in the refs/tags hierarchy of the ref namespace (typically in $GIT_DIR/refs/heads and $GIT_DIR/refs/tags directories or, as entries in file $GIT_DIR/packed-refs if refs are packed by git gc). Git imposes the following rules on how references are named: 1. They can include slash / for hierarchical (directory) grouping, but no slash-separated component can begin with a dot . or end with the sequence .lock. 2. They must contain at least one /. This enforces the presence of a category like heads/, tags/ etc. but the actual names are not restricted. If the --allow-onelevel option is used, this rule is waived. 3. They cannot have two consecutive dots .. anywhere. 4. They cannot have ASCII control characters (i.e. bytes whose values are lower than \040, or \177 DEL), space, tilde ~, caret ^, or colon : anywhere. 5. They cannot have question-mark ?, asterisk *, or open bracket [ anywhere. See the --refspec-pattern option below for an exception to this rule. 6. They cannot begin or end with a slash / or contain multiple consecutive slashes (see the --normalize option below for an exception to this rule) 7. They cannot end with a dot .. 8. They cannot contain a sequence @{. 9. They cannot be the single character @. 10. They cannot contain a \. These rules make it easy for shell script based tools to parse reference names, pathname expansion by the shell when a reference name is used unquoted (by mistake), and also avoid ambiguities in certain reference name expressions (see gitrevisions(7)): 1. A double-dot .. is often used as in ref1..ref2, and in some contexts this notation means ^ref1 ref2 (i.e. not in ref1 and in ref2). 2. A tilde ~ and caret ^ are used to introduce the postfix nth parent and peel onion operation. 3. A colon : is used as in srcref:dstref to mean "use srcref’s value and store it in dstref" in fetch and push operations. It may also be used to select a specific object such as with git cat-file: "git cat-file blob v1.3.3:refs.c". 4. at-open-brace @{ is used as a notation to access a reflog entry. With the --branch option, the command takes a name and checks if it can be used as a valid branch name (e.g. when creating a new branch). But be cautious when using the previous checkout syntax that may refer to a detached HEAD state. The rule git check-ref-format --branch $name implements may be stricter than what git check-ref-format refs/heads/$name says (e.g. a dash may appear at the beginning of a ref component, but it is explicitly forbidden at the beginning of a branch name). When run with --branch option in a repository, the input is first expanded for the “previous checkout syntax” @{-n}. For example, @{-1} is a way to refer the last thing that was checked out using "git switch" or "git checkout" operation. This option should be used by porcelains to accept this syntax anywhere a branch name is expected, so they can act as if you typed the branch name. As an exception note that, the “previous checkout operation” might result in a commit object name when the N-th last thing checked out was not a branch. --[no-]allow-onelevel Controls whether one-level refnames are accepted (i.e., refnames that do not contain multiple /-separated components). The default is --no-allow-onelevel. 
--refspec-pattern Interpret <refname> as a reference name pattern for a refspec (as used with remote repositories). If this option is enabled, <refname> is allowed to contain a single * in the refspec (e.g., foo/bar*/baz or foo/bar*baz/ but not foo/bar*/baz*). --normalize Normalize refname by removing any leading slash (/) characters and collapsing runs of adjacent slashes between name components into a single slash. If the normalized refname is valid then print it to standard output and exit with a status of 0, otherwise exit with a non-zero status. (--print is a deprecated way to spell --normalize.) | # git check-ref-format
> Checks if a given refname is acceptable, and exits with a non-zero status if
> it is not. More information: https://git-scm.com/docs/git-check-ref-format.
* Check the format of the specified refname:
`git check-ref-format {{refs/heads/refname}}`
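* Check a one-level refname (one with no slash-separated components), which is rejected by default (a sketch of the `--allow-onelevel` option described above):

`git check-ref-format --allow-onelevel {{refname}}`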
* Print the name of the last branch checked out:
`git check-ref-format --branch @{-1}`
* Normalize a refname:
`git check-ref-format --normalize {{refs/heads/refname}}` |
date | The date utility shall write the date and time to standard output or attempt to set the system date and time. By default, the current date and time shall be written. If an operand beginning with '+' is specified, the output format of date shall be controlled by the conversion specifications and other text in the operand. The date utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines. The following option shall be supported: -u Perform operations as if the TZ environment variable was set to the string "UTC0", or its equivalent historical value of "GMT0". Otherwise, date shall use the timezone indicated by the TZ environment variable or the system default if that variable is unset or null. | # date
> Set or display the system date. More information:
> https://ss64.com/osx/date.html.
* Display the current date using the default locale's format:
`date +%c`
* Display the current date in UTC and ISO 8601 format:
`date -u +%Y-%m-%dT%H:%M:%SZ`
* Display the current date as a Unix timestamp (seconds since the Unix epoch):
`date +%s`
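* Display the current date using a custom format string (any operand beginning with `+`, as described above; the format specifiers shown here are standard `strftime` ones):

`date +"{{%Y-%m-%d %H:%M:%S}}"`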
* Display a specific date (represented as a Unix timestamp) using the default format:
`date -r 1473305798` |
git-rebase | If <branch> is specified, git rebase will perform an automatic git switch <branch> before doing anything else. Otherwise it remains on the current branch. If <upstream> is not specified, the upstream configured in branch.<name>.remote and branch.<name>.merge options will be used (see git-config(1) for details) and the --fork-point option is assumed. If you are currently not on any branch or if the current branch does not have a configured upstream, the rebase will abort. All changes made by commits in the current branch but that are not in <upstream> are saved to a temporary area. This is the same set of commits that would be shown by git log <upstream>..HEAD; or by git log 'fork_point'..HEAD, if --fork-point is active (see the description on --fork-point below); or by git log HEAD, if the --root option is specified. The current branch is reset to <upstream> or <newbase> if the --onto option was supplied. This has the exact same effect as git reset --hard <upstream> (or <newbase>). ORIG_HEAD is set to point at the tip of the branch before the reset. Note ORIG_HEAD is not guaranteed to still point to the previous branch tip at the end of the rebase if other commands that write that pseudo-ref (e.g. git reset) are used during the rebase. The previous branch tip, however, is accessible using the reflog of the current branch (i.e. @{1}, see gitrevisions(7)). The commits that were previously saved into the temporary area are then reapplied to the current branch, one by one, in order. Note that any commits in HEAD which introduce the same textual changes as a commit in HEAD..<upstream> are omitted (i.e., a patch already accepted upstream with a different commit message or timestamp will be skipped). It is possible that a merge failure will prevent this process from being completely automatic. You will have to resolve any such merge failure and run git rebase --continue. Another option is to bypass the commit that caused the merge failure with git rebase --skip. To check out the original <branch> and remove the .git/rebase-apply working files, use the command git rebase --abort instead. Assume the following history exists and the current branch is "topic": A---B---C topic / D---E---F---G master From this point, the result of either of the following commands: git rebase master git rebase master topic would be: A'--B'--C' topic / D---E---F---G master NOTE: The latter form is just a short-hand of git checkout topic followed by git rebase master. When rebase exits topic will remain the checked-out branch. If the upstream branch already contains a change you have made (e.g., because you mailed a patch which was applied upstream), then that commit will be skipped and warnings will be issued (if the merge backend is used). For example, running git rebase master on the following history (in which A' and A introduce the same set of changes, but have different committer information): A---B---C topic / D---E---A'---F master will result in: B'---C' topic / D---E---A'---F master Here is how you would transplant a topic branch based on one branch to another, to pretend that you forked the topic branch from the latter branch, using rebase --onto. First let’s assume your topic is based on branch next. For example, a feature developed in topic depends on some functionality which is found in next. 
o---o---o---o---o master \ o---o---o---o---o next \ o---o---o topic We want to make topic forked from branch master; for example, because the functionality on which topic depends was merged into the more stable master branch. We want our tree to look like this: o---o---o---o---o master | \ | o'--o'--o' topic \ o---o---o---o---o next We can get this using the following command: git rebase --onto master next topic Another example of --onto option is to rebase part of a branch. If we have the following situation: H---I---J topicB / E---F---G topicA / A---B---C---D master then the command git rebase --onto master topicA topicB would result in: H'--I'--J' topicB / | E---F---G topicA |/ A---B---C---D master This is useful when topicB does not depend on topicA. A range of commits could also be removed with rebase. If we have the following situation: E---F---G---H---I---J topicA then the command git rebase --onto topicA~5 topicA~3 topicA would result in the removal of commits F and G: E---H'---I'---J' topicA This is useful if F and G were flawed in some way, or should not be part of topicA. Note that the argument to --onto and the <upstream> parameter can be any valid commit-ish. In case of conflict, git rebase will stop at the first problematic commit and leave conflict markers in the tree. You can use git diff to locate the markers (<<<<<<) and make edits to resolve the conflict. For each file you edit, you need to tell Git that the conflict has been resolved, typically this would be done with git add <filename> After resolving the conflict manually and updating the index with the desired resolution, you can continue the rebasing process with git rebase --continue Alternatively, you can undo the git rebase with git rebase --abort --onto <newbase> Starting point at which to create the new commits. If the --onto option is not specified, the starting point is <upstream>. May be any valid commit, and not just an existing branch name. As a special case, you may use "A...B" as a shortcut for the merge base of A and B if there is exactly one merge base. You can leave out at most one of A and B, in which case it defaults to HEAD. --keep-base Set the starting point at which to create the new commits to the merge base of <upstream> and <branch>. Running git rebase --keep-base <upstream> <branch> is equivalent to running git rebase --reapply-cherry-picks --no-fork-point --onto <upstream>...<branch> <upstream> <branch>. This option is useful in the case where one is developing a feature on top of an upstream branch. While the feature is being worked on, the upstream branch may advance and it may not be the best idea to keep rebasing on top of the upstream but to keep the base commit as-is. As the base commit is unchanged this option implies --reapply-cherry-picks to avoid losing commits. Although both this option and --fork-point find the merge base between <upstream> and <branch>, this option uses the merge base as the starting point on which new commits will be created, whereas --fork-point uses the merge base to determine the set of commits which will be rebased. See also INCOMPATIBLE OPTIONS below. <upstream> Upstream branch to compare against. May be any valid commit, not just an existing branch name. Defaults to the configured upstream for the current branch. <branch> Working branch; defaults to HEAD. --apply Use applying strategies to rebase (calling git-am internally). This option may become a no-op in the future once the merge backend handles everything the apply one does. 
See also INCOMPATIBLE OPTIONS below. --empty={drop,keep,ask} How to handle commits that are not empty to start and are not clean cherry-picks of any upstream commit, but which become empty after rebasing (because they contain a subset of already upstream changes). With drop (the default), commits that become empty are dropped. With keep, such commits are kept. With ask (implied by --interactive), the rebase will halt when an empty commit is applied allowing you to choose whether to drop it, edit files more, or just commit the empty changes. Other options, like --exec, will use the default of drop unless -i/--interactive is explicitly specified. Note that commits which start empty are kept (unless --no-keep-empty is specified), and commits which are clean cherry-picks (as determined by git log --cherry-mark ...) are detected and dropped as a preliminary step (unless --reapply-cherry-picks or --keep-base is passed). See also INCOMPATIBLE OPTIONS below. --no-keep-empty, --keep-empty Do not keep commits that start empty before the rebase (i.e. that do not change anything from its parent) in the result. The default is to keep commits which start empty, since creating such commits requires passing the --allow-empty override flag to git commit, signifying that a user is very intentionally creating such a commit and thus wants to keep it. Usage of this flag will probably be rare, since you can get rid of commits that start empty by just firing up an interactive rebase and removing the lines corresponding to the commits you don’t want. This flag exists as a convenient shortcut, such as for cases where external tools generate many empty commits and you want them all removed. For commits which do not start empty but become empty after rebasing, see the --empty flag. See also INCOMPATIBLE OPTIONS below. --reapply-cherry-picks, --no-reapply-cherry-picks Reapply all clean cherry-picks of any upstream commit instead of preemptively dropping them. (If these commits then become empty after rebasing, because they contain a subset of already upstream changes, the behavior towards them is controlled by the --empty flag.) In the absence of --keep-base (or if --no-reapply-cherry-picks is given), these commits will be automatically dropped. Because this necessitates reading all upstream commits, this can be expensive in repositories with a large number of upstream commits that need to be read. When using the merge backend, warnings will be issued for each dropped commit (unless --quiet is given). Advice will also be issued unless advice.skippedCherryPicks is set to false (see git-config(1)). --reapply-cherry-picks allows rebase to forgo reading all upstream commits, potentially improving performance. See also INCOMPATIBLE OPTIONS below. --allow-empty-message No-op. Rebasing commits with an empty message used to fail and this option would override that behavior, allowing commits with empty messages to be rebased. Now commits with an empty message do not cause rebasing to halt. See also INCOMPATIBLE OPTIONS below. -m, --merge Using merging strategies to rebase (default). Note that a rebase merge works by replaying each commit from the working branch on top of the <upstream> branch. Because of this, when a merge conflict happens, the side reported as ours is the so-far rebased series, starting with <upstream>, and theirs is the working branch. In other words, the sides are swapped. See also INCOMPATIBLE OPTIONS below. -s <strategy>, --strategy=<strategy> Use the given merge strategy, instead of the default ort. 
This implies --merge. Because git rebase replays each commit from the working branch on top of the <upstream> branch using the given strategy, using the ours strategy simply empties all patches from the <branch>, which makes little sense. See also INCOMPATIBLE OPTIONS below. -X <strategy-option>, --strategy-option=<strategy-option> Pass the <strategy-option> through to the merge strategy. This implies --merge and, if no strategy has been specified, -s ort. Note the reversal of ours and theirs as noted above for the -m option. See also INCOMPATIBLE OPTIONS below. --rerere-autoupdate, --no-rerere-autoupdate After the rerere mechanism reuses a recorded resolution on the current conflict to update the files in the working tree, allow it to also update the index with the result of resolution. --no-rerere-autoupdate is a good way to double-check what rerere did and catch potential mismerges, before committing the result to the index with a separate git add. -S[<keyid>], --gpg-sign[=<keyid>], --no-gpg-sign GPG-sign commits. The keyid argument is optional and defaults to the committer identity; if specified, it must be stuck to the option without a space. --no-gpg-sign is useful to countermand both commit.gpgSign configuration variable, and earlier --gpg-sign. -q, --quiet Be quiet. Implies --no-stat. -v, --verbose Be verbose. Implies --stat. --stat Show a diffstat of what changed upstream since the last rebase. The diffstat is also controlled by the configuration option rebase.stat. -n, --no-stat Do not show a diffstat as part of the rebase process. --no-verify This option bypasses the pre-rebase hook. See also githooks(5). --verify Allows the pre-rebase hook to run, which is the default. This option can be used to override --no-verify. See also githooks(5). -C<n> Ensure at least <n> lines of surrounding context match before and after each change. When fewer lines of surrounding context exist they all must match. By default no context is ever ignored. Implies --apply. See also INCOMPATIBLE OPTIONS below. --no-ff, --force-rebase, -f Individually replay all rebased commits instead of fast-forwarding over the unchanged ones. This ensures that the entire history of the rebased branch is composed of new commits. You may find this helpful after reverting a topic branch merge, as this option recreates the topic branch with fresh commits so it can be remerged successfully without needing to "revert the reversion" (see the revert-a-faulty-merge How-To[1] for details). --fork-point, --no-fork-point Use reflog to find a better common ancestor between <upstream> and <branch> when calculating which commits have been introduced by <branch>. When --fork-point is active, fork_point will be used instead of <upstream> to calculate the set of commits to rebase, where fork_point is the result of git merge-base --fork-point <upstream> <branch> command (see git-merge-base(1)). If fork_point ends up being empty, the <upstream> will be used as a fallback. If <upstream> or --keep-base is given on the command line, then the default is --no-fork-point, otherwise the default is --fork-point. See also rebase.forkpoint in git-config(1). If your branch was based on <upstream> but <upstream> was rewound and your branch contains commits which were dropped, this option can be used with --keep-base in order to drop those commits from your branch. See also INCOMPATIBLE OPTIONS below. --ignore-whitespace Ignore whitespace differences when trying to reconcile differences. 
Currently, each backend implements an approximation of this behavior: apply backend When applying a patch, ignore changes in whitespace in context lines. Unfortunately, this means that if the "old" lines being replaced by the patch differ only in whitespace from the existing file, you will get a merge conflict instead of a successful patch application. merge backend Treat lines with only whitespace changes as unchanged when merging. Unfortunately, this means that any patch hunks that were intended to modify whitespace and nothing else will be dropped, even if the other side had no changes that conflicted. --whitespace=<option> This flag is passed to the git apply program (see git-apply(1)) that applies the patch. Implies --apply. See also INCOMPATIBLE OPTIONS below. --committer-date-is-author-date Instead of using the current time as the committer date, use the author date of the commit being rebased as the committer date. This option implies --force-rebase. --ignore-date, --reset-author-date Instead of using the author date of the original commit, use the current time as the author date of the rebased commit. This option implies --force-rebase. See also INCOMPATIBLE OPTIONS below. --signoff Add a Signed-off-by trailer to all the rebased commits. Note that if --interactive is given then only commits marked to be picked, edited or reworded will have the trailer added. See also INCOMPATIBLE OPTIONS below. -i, --interactive Make a list of the commits which are about to be rebased. Let the user edit that list before rebasing. This mode can also be used to split commits (see SPLITTING COMMITS below). The commit list format can be changed by setting the configuration option rebase.instructionFormat. A customized instruction format will automatically have the long commit hash prepended to the format. See also INCOMPATIBLE OPTIONS below. -r, --rebase-merges[=(rebase-cousins|no-rebase-cousins)], --no-rebase-merges By default, a rebase will simply drop merge commits from the todo list, and put the rebased commits into a single, linear branch. With --rebase-merges, the rebase will instead try to preserve the branching structure within the commits that are to be rebased, by recreating the merge commits. Any resolved merge conflicts or manual amendments in these merge commits will have to be resolved/re-applied manually. --no-rebase-merges can be used to countermand both the rebase.rebaseMerges config option and a previous --rebase-merges. When rebasing merges, there are two modes: rebase-cousins and no-rebase-cousins. If the mode is not specified, it defaults to no-rebase-cousins. In no-rebase-cousins mode, commits which do not have <upstream> as direct ancestor will keep their original branch point, i.e. commits that would be excluded by git-log(1)'s --ancestry-path option will keep their original ancestry by default. In rebase-cousins mode, such commits are instead rebased onto <upstream> (or <onto>, if specified). It is currently only possible to recreate the merge commits using the ort merge strategy; different merge strategies can be used only via explicit exec git merge -s <strategy> [...] commands. See also REBASING MERGES and INCOMPATIBLE OPTIONS below. -x <cmd>, --exec <cmd> Append "exec <cmd>" after each line creating a commit in the final history. <cmd> will be interpreted as one or more shell commands. Any command that fails will interrupt the rebase, with exit code 1. 
You may execute several commands by either using one instance of --exec with several commands: git rebase -i --exec "cmd1 && cmd2 && ..." or by giving more than one --exec: git rebase -i --exec "cmd1" --exec "cmd2" --exec ... If --autosquash is used, exec lines will not be appended for the intermediate commits, and will only appear at the end of each squash/fixup series. This uses the --interactive machinery internally, but it can be run without an explicit --interactive. See also INCOMPATIBLE OPTIONS below. --root Rebase all commits reachable from <branch>, instead of limiting them with an <upstream>. This allows you to rebase the root commit(s) on a branch. See also INCOMPATIBLE OPTIONS below. --autosquash, --no-autosquash When the commit log message begins with "squash! ..." or "fixup! ..." or "amend! ...", and there is already a commit in the todo list that matches the same ..., automatically modify the todo list of rebase -i, so that the commit marked for squashing comes right after the commit to be modified, and change the action of the moved commit from pick to squash or fixup or fixup -C respectively. A commit matches the ... if the commit subject matches, or if the ... refers to the commit’s hash. As a fall-back, partial matches of the commit subject work, too. The recommended way to create fixup/amend/squash commits is by using the --fixup, --fixup=amend: or --fixup=reword: and --squash options respectively of git-commit(1). If the --autosquash option is enabled by default using the configuration variable rebase.autoSquash, this option can be used to override and disable this setting. See also INCOMPATIBLE OPTIONS below. --autostash, --no-autostash Automatically create a temporary stash entry before the operation begins, and apply it after the operation ends. This means that you can run rebase on a dirty worktree. However, use with care: the final stash application after a successful rebase might result in non-trivial conflicts. --reschedule-failed-exec, --no-reschedule-failed-exec Automatically reschedule exec commands that failed. This only makes sense in interactive mode (or when an --exec option was provided). Even though this option applies once a rebase is started, it’s set for the whole rebase at the start based on either the rebase.rescheduleFailedExec configuration (see git-config(1) or "CONFIGURATION" below) or whether this option is provided. Otherwise an explicit --no-reschedule-failed-exec at the start would be overridden by the presence of rebase.rescheduleFailedExec=true configuration. --update-refs, --no-update-refs Automatically force-update any branches that point to commits that are being rebased. Any branches that are checked out in a worktree are not updated in this way. If the configuration variable rebase.updateRefs is set, then this option can be used to override and disable this setting. See also INCOMPATIBLE OPTIONS below. | # git rebase
> Reapply commits from one branch on top of another branch. Commonly used to
> "move" an entire branch to another base, creating copies of the commits in
> the new location. More information: https://git-scm.com/docs/git-rebase.
* Rebase the current branch on top of another specified branch:
`git rebase {{new_base_branch}}`
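* Rebase onto another branch while automatically stashing and re-applying uncommitted changes (a sketch of the `--autostash` option described above; the final stash application may still produce conflicts):

`git rebase --autostash {{new_base_branch}}`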
* Start an interactive rebase, which allows the commits to be reordered, omitted, combined or modified:
`git rebase -i {{target_base_branch_or_commit_hash}}`
* Continue a rebase that was interrupted by a merge failure, after editing conflicting files:
`git rebase --continue`
* Continue a rebase that was paused due to merge conflicts, by skipping the conflicted commit:
`git rebase --skip`
* Abort a rebase in progress (e.g. if it is interrupted by a merge conflict):
`git rebase --abort`
* Move part of the current branch onto a new base, providing the old base to start from:
`git rebase --onto {{new_base}} {{old_base}}`
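* Transplant the commits between an old base and a branch tip onto a new base (the three-argument `--onto` form described in the text above):

`git rebase --onto {{new_base}} {{old_base}} {{topic_branch}}`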
* Reapply the last 5 commits in-place, stopping to allow them to be reordered, omitted, combined or modified:
`git rebase -i {{HEAD~5}}`
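* Run a command after each rebased commit, interrupting the rebase if it fails (a sketch of the `--exec` option described above; `{{command}}` stands for any shell command such as a test runner):

`git rebase -i --exec "{{command}}" {{target_base_branch}}`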
* Auto-resolve any conflicts by favoring the working branch version (`theirs` keyword has reversed meaning in this case):
`git rebase -X theirs {{branch_name}}` |
git-commit-graph | Manage the serialized commit-graph file. --object-dir Use given directory for the location of packfiles and commit-graph file. This parameter exists to specify the location of an alternate that only has the objects directory, not a full .git directory. The commit-graph file is expected to be in the <dir>/info directory and the packfiles are expected to be in <dir>/pack. If the directory could not be made into an absolute path, or does not match any known object directory, git commit-graph ... will exit with non-zero status. --[no-]progress Turn progress on/off explicitly. If neither is specified, progress is shown if standard error is connected to a terminal. | # git commit-graph
> Write and verify Git commit-graph files. More information:
> https://git-scm.com/docs/git-commit-graph.
* Write a commit-graph file for the packed commits in the repository's local `.git` directory:
`git commit-graph write`
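* Verify the commit-graph file against the object database (a sketch of the `verify` subcommand implied by the description above; it is not covered in the option list):

`git commit-graph verify`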
* Write a commit-graph file containing all reachable commits:
`git show-ref --hash | git commit-graph write --stdin-commits`
* Write a commit-graph file containing all commits in the current commit-graph file along with those reachable from `HEAD`:
`git rev-parse {{HEAD}} | git commit-graph write --stdin-commits --append` |
chroot | Run COMMAND with root directory set to NEWROOT. --groups=G_LIST specify supplementary groups as g1,g2,..,gN --userspec=USER:GROUP specify user and group (ID or name) to use --skip-chdir do not change working directory to '/' --help display this help and exit --version output version information and exit If no command is given, run '"$SHELL" -i' (default: '/bin/sh -i'). Exit status: 125 if the chroot command itself fails 126 if COMMAND is found but cannot be invoked 127 if COMMAND cannot be found - the exit status of COMMAND otherwise | # chroot
> Run command or interactive shell with special root directory. More
> information: https://www.gnu.org/software/coreutils/chroot.
* Run command as new root directory:
`chroot {{path/to/new/root}} {{command}}`
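* Run a command with specific supplementary groups (a sketch of the `--groups` option described above; groups are given as a comma-separated list):

`chroot --groups={{group1,group2}} {{path/to/new/root}} {{command}}`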
* Specify the user and group (ID or name) to use when running a command in the new root:

`chroot --userspec={{user:group}} {{path/to/new/root}} {{command}}` |
mesg | The mesg utility is invoked by a user to control write access others have to the terminal device associated with standard error output. If write access is allowed, then programs such as talk(1) and write(1) may display messages on the terminal. Traditionally, write access is allowed by default. However, as users become more conscious of various security risks, there is a trend to remove write access by default, at least for the primary login shell. To make sure your ttys are set the way you want them to be set, mesg should be executed in your login scripts. The mesg utility silently exits with error status 2 if not executed on a terminal. In this case executing mesg is pointless. The command line option --verbose forces mesg to print a warning in this situation. This behaviour has been introduced in version 2.33. -v, --verbose Explain what is being done. -h, --help Display help text and exit. -V, --version Print version and exit. | # mesg
> Check or set a terminal's ability to receive messages from other users,
> usually from the write command. See also `write`. More information:
> https://manned.org/mesg.
* Check whether the terminal currently allows receiving messages:
`mesg`
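* Check message status, printing a warning even when not executed on a terminal (a sketch of the `--verbose` option described above):

`mesg --verbose`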
* Disable receiving messages from the write command:
`mesg n`
* Enable receiving messages from the write command:
`mesg y` |
grep | grep searches for PATTERNS in each FILE. PATTERNS is one or more patterns separated by newline characters, and grep prints each line that matches a pattern. Typically PATTERNS should be quoted when grep is used in a shell command. A FILE of “-” stands for standard input. If no FILE is given, recursive searches examine the working directory, and nonrecursive searches read standard input. Generic Program Information --help Output a usage message and exit. -V, --version Output the version number of grep and exit. Pattern Syntax -E, --extended-regexp Interpret PATTERNS as extended regular expressions (EREs, see below). -F, --fixed-strings Interpret PATTERNS as fixed strings, not regular expressions. -G, --basic-regexp Interpret PATTERNS as basic regular expressions (BREs, see below). This is the default. -P, --perl-regexp Interpret PATTERNS as Perl-compatible regular expressions (PCREs). This option is experimental when combined with the -z (--null-data) option, and grep -P may warn of unimplemented features. Matching Control -e PATTERNS, --regexp=PATTERNS Use PATTERNS as the patterns. If this option is used multiple times or is combined with the -f (--file) option, search for all patterns given. This option can be used to protect a pattern beginning with “-”. -f FILE, --file=FILE Obtain patterns from FILE, one per line. If this option is used multiple times or is combined with the -e (--regexp) option, search for all patterns given. The empty file contains zero patterns, and therefore matches nothing. If FILE is - , read patterns from standard input. -i, --ignore-case Ignore case distinctions in patterns and input data, so that characters that differ only in case match each other. --no-ignore-case Do not ignore case distinctions in patterns and input data. This is the default. This option is useful for passing to shell scripts that already use -i, to cancel its effects because the two options override each other. -v, --invert-match Invert the sense of matching, to select non-matching lines. -w, --word-regexp Select only those lines containing matches that form whole words. The test is that the matching substring must either be at the beginning of the line, or preceded by a non-word constituent character. Similarly, it must be either at the end of the line or followed by a non-word constituent character. Word-constituent characters are letters, digits, and the underscore. This option has no effect if -x is also specified. -x, --line-regexp Select only those matches that exactly match the whole line. For a regular expression pattern, this is like parenthesizing the pattern and then surrounding it with ^ and $. General Output Control -c, --count Suppress normal output; instead print a count of matching lines for each input file. With the -v, --invert-match option (see above), count non-matching lines. --color[=WHEN], --colour[=WHEN] Surround the matched (non-empty) strings, matching lines, context lines, file names, line numbers, byte offsets, and separators (for fields and groups of context lines) with escape sequences to display them in color on the terminal. The colors are defined by the environment variable GREP_COLORS. WHEN is never, always, or auto. -L, --files-without-match Suppress normal output; instead print the name of each input file from which no output would normally have been printed. -l, --files-with-matches Suppress normal output; instead print the name of each input file from which output would normally have been printed. Scanning each input file stops upon first match. 
-m NUM, --max-count=NUM Stop reading a file after NUM matching lines. If NUM is zero, grep stops right away without reading input. A NUM of -1 is treated as infinity and grep does not stop; this is the default. If the input is standard input from a regular file, and NUM matching lines are output, grep ensures that the standard input is positioned to just after the last matching line before exiting, regardless of the presence of trailing context lines. This enables a calling process to resume a search. When grep stops after NUM matching lines, it outputs any trailing context lines. When the -c or --count option is also used, grep does not output a count greater than NUM. When the -v or --invert-match option is also used, grep stops after outputting NUM non-matching lines. -o, --only-matching Print only the matched (non-empty) parts of a matching line, with each such part on a separate output line. -q, --quiet, --silent Quiet; do not write anything to standard output. Exit immediately with zero status if any match is found, even if an error was detected. Also see the -s or --no-messages option. -s, --no-messages Suppress error messages about nonexistent or unreadable files. Output Line Prefix Control -b, --byte-offset Print the 0-based byte offset within the input file before each line of output. If -o (--only-matching) is specified, print the offset of the matching part itself. -H, --with-filename Print the file name for each match. This is the default when there is more than one file to search. This is a GNU extension. -h, --no-filename Suppress the prefixing of file names on output. This is the default when there is only one file (or only standard input) to search. --label=LABEL Display input actually coming from standard input as input coming from file LABEL. This can be useful for commands that transform a file's contents before searching, e.g., gzip -cd foo.gz | grep --label=foo -H 'some pattern'. See also the -H option. -n, --line-number Prefix each line of output with the 1-based line number within its input file. -T, --initial-tab Make sure that the first character of actual line content lies on a tab stop, so that the alignment of tabs looks normal. This is useful with options that prefix their output to the actual content: -H,-n, and -b. In order to improve the probability that lines from a single file will all start at the same column, this also causes the line number and byte offset (if present) to be printed in a minimum size field width. -Z, --null Output a zero byte (the ASCII NUL character) instead of the character that normally follows a file name. For example, grep -lZ outputs a zero byte after each file name instead of the usual newline. This option makes the output unambiguous, even in the presence of file names containing unusual characters like newlines. This option can be used with commands like find -print0, perl -0, sort -z, and xargs -0 to process arbitrary file names, even those that contain newline characters. Context Line Control -A NUM, --after-context=NUM Print NUM lines of trailing context after matching lines. Places a line containing a group separator (--) between contiguous groups of matches. With the -o or --only-matching option, this has no effect and a warning is given. -B NUM, --before-context=NUM Print NUM lines of leading context before matching lines. Places a line containing a group separator (--) between contiguous groups of matches. With the -o or --only-matching option, this has no effect and a warning is given. 
-C NUM, -NUM, --context=NUM Print NUM lines of output context. Places a line containing a group separator (--) between contiguous groups of matches. With the -o or --only-matching option, this has no effect and a warning is given. --group-separator=SEP When -A, -B, or -C are in use, print SEP instead of -- between groups of lines. --no-group-separator When -A, -B, or -C are in use, do not print a separator between groups of lines. File and Directory Selection -a, --text Process a binary file as if it were text; this is equivalent to the --binary-files=text option. --binary-files=TYPE If a file's data or metadata indicate that the file contains binary data, assume that the file is of type TYPE. Non-text bytes indicate binary data; these are either output bytes that are improperly encoded for the current locale, or null input bytes when the -z option is not given. By default, TYPE is binary, and grep suppresses output after null input binary data is discovered, and suppresses output lines that contain improperly encoded data. When some output is suppressed, grep follows any output with a message to standard error saying that a binary file matches. If TYPE is without-match, when grep discovers null input binary data it assumes that the rest of the file does not match; this is equivalent to the -I option. If TYPE is text, grep processes a binary file as if it were text; this is equivalent to the -a option. When type is binary, grep may treat non-text bytes as line terminators even without the -z option. This means choosing binary versus text can affect whether a pattern matches a file. For example, when type is binary the pattern q$ might match q immediately followed by a null byte, even though this is not matched when type is text. Conversely, when type is binary the pattern . (period) might not match a null byte. Warning: The -a option might output binary garbage, which can have nasty side effects if the output is a terminal and if the terminal driver interprets some of it as commands. On the other hand, when reading files whose text encodings are unknown, it can be helpful to use -a or to set LC_ALL='C' in the environment, in order to find more matches even if the matches are unsafe for direct display. -D ACTION, --devices=ACTION If an input file is a device, FIFO or socket, use ACTION to process it. By default, ACTION is read, which means that devices are read just as if they were ordinary files. If ACTION is skip, devices are silently skipped. -d ACTION, --directories=ACTION If an input file is a directory, use ACTION to process it. By default, ACTION is read, i.e., read directories just as if they were ordinary files. If ACTION is skip, silently skip directories. If ACTION is recurse, read all files under each directory, recursively, following symbolic links only if they are on the command line. This is equivalent to the -r option. --exclude=GLOB Skip any command-line file with a name suffix that matches the pattern GLOB, using wildcard matching; a name suffix is either the whole name, or a trailing part that starts with a non-slash character immediately after a slash (/) in the name. When searching recursively, skip any subfile whose base name matches GLOB; the base name is the part after the last slash. A pattern can use *, ?, and [...] as wildcards, and \ to quote a wildcard or backslash character literally. --exclude-from=FILE Skip files whose base name matches any of the file-name globs read from FILE (using wildcard matching as described under --exclude). 
--exclude-dir=GLOB Skip any command-line directory with a name suffix that matches the pattern GLOB. When searching recursively, skip any subdirectory whose base name matches GLOB. Ignore any redundant trailing slashes in GLOB. -I Process a binary file as if it did not contain matching data; this is equivalent to the --binary-files=without-match option. --include=GLOB Search only files whose base name matches GLOB (using wildcard matching as described under --exclude). If contradictory --include and --exclude options are given, the last matching one wins. If no --include or --exclude options match, a file is included unless the first such option is --include. -r, --recursive Read all files under each directory, recursively, following symbolic links only if they are on the command line. Note that if no file operand is given, grep searches the working directory. This is equivalent to the -d recurse option. -R, --dereference-recursive Read all files under each directory, recursively. Follow all symbolic links, unlike -r. Other Options --line-buffered Use line buffering on output. This can cause a performance penalty. -U, --binary Treat the file(s) as binary. By default, under MS-DOS and MS-Windows, grep guesses whether a file is text or binary as described for the --binary-files option. If grep decides the file is a text file, it strips the CR characters from the original file contents (to make regular expressions with ^ and $ work correctly). Specifying -U overrules this guesswork, causing all files to be read and passed to the matching mechanism verbatim; if the file is a text file with CR/LF pairs at the end of each line, this will cause some regular expressions to fail. This option has no effect on platforms other than MS-DOS and MS-Windows. -z, --null-data Treat input and output data as sequences of lines, each terminated by a zero byte (the ASCII NUL character) instead of a newline. Like the -Z or --null option, this option can be used with commands like sort -z to process arbitrary file names. | # grep
> Find patterns in files using regular expressions. More information:
> https://www.gnu.org/software/grep/manual/grep.html.
* Search for a pattern within a file:
`grep "{{search_pattern}}" {{path/to/file}}`
* Search for an exact string (disables regular expressions):
`grep --fixed-strings "{{exact_string}}" {{path/to/file}}`
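* Print only the names of files that contain at least one match (a sketch of the `--files-with-matches` option described above):

`grep --recursive --files-with-matches "{{search_pattern}}" {{path/to/directory}}`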
* Search for a pattern in all files recursively in a directory, showing line numbers of matches, ignoring binary files:
`grep --recursive --line-number --binary-files={{without-match}} "{{search_pattern}}" {{path/to/directory}}`
* Use extended regular expressions (supports `?`, `+`, `{}`, `()` and `|`), in case-insensitive mode:
`grep --extended-regexp --ignore-case "{{search_pattern}}" {{path/to/file}}`
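* Match only whole words, ignoring partial-word matches (a sketch of the `--word-regexp` option described above):

`grep --word-regexp "{{search_pattern}}" {{path/to/file}}`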
* Print 3 lines of context around, before, or after each match:
`grep --{{context|before-context|after-context}}={{3}} "{{search_pattern}}" {{path/to/file}}`
* Print file name and line number for each match with color output:
`grep --with-filename --line-number --color=always "{{search_pattern}}" {{path/to/file}}`
* Search for lines matching a pattern, printing only the matched text:
`grep --only-matching "{{search_pattern}}" {{path/to/file}}`
* Search `stdin` for lines that do not match a pattern:
`cat {{path/to/file}} | grep --invert-match "{{search_pattern}}"` |
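The following extra example is a sketch that is not part of the upstream summary; the `*.c` glob and the `build` directory name are only placeholders.
* Recursively search only C source files while skipping a build directory, combining the `--recursive`, `--include` and `--exclude-dir` options described above:
`grep --recursive --include='{{*.c}}' --exclude-dir={{build}} "{{search_pattern}}" {{path/to/directory}}`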
less | Less is a program similar to more(1), but which allows backward movement in the file as well as forward movement. Also, less does not have to read the entire input file before starting, so with large input files it starts up faster than text editors like vi(1). Less uses termcap (or terminfo on some systems), so it can run on a variety of terminals. There is even limited support for hardcopy terminals. (On a hardcopy terminal, lines which should be printed at the top of the screen are prefixed with a caret.) Commands are based on both more and vi. Commands may be preceded by a decimal number, called N in the descriptions below. The number is used by some commands, as indicated. Command line options are described below. Most options may be changed while less is running, via the "-" command. Some options may be given in one of two forms: either a dash followed by a single letter, or two dashes followed by a long option name. A long option name may be abbreviated as long as the abbreviation is unambiguous. For example, --quit-at-eof may be abbreviated --quit, but not --qui, since both --quit-at-eof and --quiet begin with --qui. Some long option names are in uppercase, such as --QUIT-AT-EOF, as distinct from --quit-at-eof. Such option names need only have their first letter capitalized; the remainder of the name may be in either case. For example, --Quit-at-eof is equivalent to --QUIT-AT-EOF. Options are also taken from the environment variable "LESS". For example, to avoid typing "less -options ..." each time less is invoked, you might tell csh: setenv LESS "-options" or if you use sh: LESS="-options"; export LESS On MS-DOS, you don't need the quotes, but you should replace any percent signs in the options string by double percent signs. The environment variable is parsed before the command line, so command line options override the LESS environment variable. If an option appears in the LESS variable, it can be reset to its default value on the command line by beginning the command line option with "-+". Some options like -k or -D require a string to follow the option letter. The string for that option is considered to end when a dollar sign ($) is found. For example, you can set two -D options like this: LESS="Dn9.1$Ds4.1" If the --use-backslash option appears earlier in the options, then a dollar sign or backslash may be included literally in an option string by preceding it with a backslash. If the --use- backslash option is not in effect, then backslashes are not treated specially, and there is no way to include a dollar sign in the option string. -? or --help This option displays a summary of the commands accepted by less (the same as the h command). (Depending on how your shell interprets the question mark, it may be necessary to quote the question mark, thus: "-\?".) -a or --search-skip-screen By default, forward searches start at the top of the displayed screen and backwards searches start at the bottom of the displayed screen (except for repeated searches invoked by the n or N commands, which start after or before the "target" line respectively; see the -j option for more about the target line). The -a option causes forward searches to instead start at the bottom of the screen and backward searches to start at the top of the screen, thus skipping all lines displayed on the screen. -A or --SEARCH-SKIP-SCREEN Causes all forward searches (not just non-repeated searches) to start just after the target line, and all backward searches to start just before the target line. 
Thus, forward searches will skip part of the displayed screen (from the first line up to and including the target line). Similarly backwards searches will skip the displayed screen from the last line up to and including the target line. This was the default behavior in less versions prior to 441. -bn or --buffers=n Specifies the amount of buffer space less will use for each file, in units of kilobytes (1024 bytes). By default 64 KB of buffer space is used for each file (unless the file is a pipe; see the -B option). The -b option specifies instead that n kilobytes of buffer space should be used for each file. If n is -1, buffer space is unlimited; that is, the entire file can be read into memory. -B or --auto-buffers By default, when data is read from a pipe, buffers are allocated automatically as needed. If a large amount of data is read from the pipe, this can cause a large amount of memory to be allocated. The -B option disables this automatic allocation of buffers for pipes, so that only 64 KB (or the amount of space specified by the -b option) is used for the pipe. Warning: use of -B can result in erroneous display, since only the most recently viewed part of the piped data is kept in memory; any earlier data is lost. Lost characters are displayed as question marks. -c or --clear-screen Causes full screen repaints to be painted from the top line down. By default, full screen repaints are done by scrolling from the bottom of the screen. -C or --CLEAR-SCREEN Same as -c, for compatibility with older versions of less. -d or --dumb The -d option suppresses the error message normally displayed if the terminal is dumb; that is, lacks some important capability, such as the ability to clear the screen or scroll backward. The -d option does not otherwise change the behavior of less on a dumb terminal. -Dxcolor or --color=xcolor Changes the color of different parts of the displayed text. x is a single character which selects the type of text whose color is being set: B Binary characters. C Control characters. E Errors and informational messages. H Header lines and columns, set via the --header option. M Mark letters in the status column. N Line numbers enabled via the -N option. P Prompts. R The rscroll character. S Search results. 1-5 The text in a search result which matches the first through fifth parenthesized sub-pattern. Sub- pattern coloring works only if less is built with one of the regular expression libraries posix, pcre, or pcre2. W The highlight enabled via the -w option. d Bold text. k Blinking text. s Standout text. u Underlined text. The uppercase letters and digits can be used only when the --use-color option is enabled. When text color is specified by both an uppercase letter and a lowercase letter, the uppercase letter takes precedence. For example, error messages are normally displayed as standout text. So if both "s" and "E" are given a color, the "E" color applies to error messages, and the "s" color applies to other standout text. The "d" and "u" letters refer to bold and underline text formed by overstriking with backspaces (see the -U option), not to text using ANSI escape sequences with the -R option. A lowercase letter may be followed by a + to indicate that the normal format change and the specified color should both be used. For example, -Dug displays underlined text as green without underlining; the green color has replaced the usual underline formatting. But -Du+g displays underlined text as both green and in underlined format. 
color is either a 4-bit color string or an 8-bit color string: A 4-bit color string is zero, one or two characters, where the first character specifies the foreground color and the second specifies the background color as follows: b Blue c Cyan g Green k Black m Magenta r Red w White y Yellow The corresponding uppercase letter denotes a brighter shade of the color. For example, -DNGk displays line numbers as bright green text on a black background, and -DEbR displays error messages as blue text on a bright red background. If either character is a "-" or is omitted, the corresponding color is set to that of normal text. An 8-bit color string is one or two decimal integers separated by a dot, where the first integer specifies the foreground color and the second specifies the background color. Each integer is a value between 0 and 255 inclusive which selects a "CSI 38;5" color value (see https://en.wikipedia.org/wiki/ANSI_escape_code#SGR) If either integer is a "-" or is omitted, the corresponding color is set to that of normal text. On MS-DOS versions of less, 8-bit color is not supported; instead, decimal values are interpreted as 4-bit CHAR_INFO.Attributes values (see https://docs.microsoft.com/en-us/windows/console/char-info-str). -e or --quit-at-eof Causes less to automatically exit the second time it reaches end-of-file. By default, the only way to exit less is via the "q" command. -E or --QUIT-AT-EOF Causes less to automatically exit the first time it reaches end-of-file. -f or --force Forces non-regular files to be opened. (A non-regular file is a directory or a device special file.) Also suppresses the warning message when a binary file is opened. By default, less will refuse to open non-regular files. Note that some operating systems will not allow directories to be read, even if -f is set. -F or --quit-if-one-screen Causes less to automatically exit if the entire file can be displayed on the first screen. -g or --hilite-search Normally, less will highlight ALL strings which match the last search command. The -g option changes this behavior to highlight only the particular string which was found by the last search command. This can cause less to run somewhat faster than the default. -G or --HILITE-SEARCH The -G option suppresses all highlighting of strings found by search commands. -hn or --max-back-scroll=n Specifies a maximum number of lines to scroll backward. If it is necessary to scroll backward more than n lines, the screen is repainted in a forward direction instead. (If the terminal does not have the ability to scroll backward, -h0 is implied.) -i or --ignore-case Causes searches to ignore case; that is, uppercase and lowercase are considered identical. This option is ignored if any uppercase letters appear in the search pattern; in other words, if a pattern contains uppercase letters, then that search does not ignore case. -I or --IGNORE-CASE Like -i, but searches ignore case even if the pattern contains uppercase letters. -jn or --jump-target=n Specifies a line on the screen where the "target" line is to be positioned. The target line is the line specified by any command to search for a pattern, jump to a line number, jump to a file percentage or jump to a tag. The screen line may be specified by a number: the top line on the screen is 1, the next is 2, and so on. The number may be negative to specify a line relative to the bottom of the screen: the bottom line on the screen is -1, the second to the bottom is -2, and so on. 
Alternately, the screen line may be specified as a fraction of the height of the screen, starting with a decimal point: .5 is in the middle of the screen, .3 is three tenths down from the first line, and so on. If the line is specified as a fraction, the actual line number is recalculated if the terminal window is resized. If any form of the -j option is used, repeated forward searches (invoked with "n" or "N") begin at the line immediately after the target line, and repeated backward searches begin at the target line, unless changed by -a or -A. For example, if "-j4" is used, the target line is the fourth line on the screen, so forward searches begin at the fifth line on the screen. However nonrepeated searches (invoked with "/" or "?") always begin at the start or end of the current screen respectively. -J or --status-column Displays a status column at the left edge of the screen. The character displayed in the status column may be one of: > The line is chopped with the -S option, and the text that is chopped off beyond the right edge of the screen contains a match for the current search. < The line is horizontally shifted, and the text that is shifted beyond the left side of the screen contains a match for the current search. = The line is both chopped and shifted, and there are matches beyond both sides of the screen. * There are matches in the visible part of the line but none to the right or left of it. a-z, A-Z The line has been marked with the corresponding letter via the m command. -kfilename or --lesskey-file=filename Causes less to open and interpret the named file as a lesskey(1) binary file. Multiple -k options may be specified. If the LESSKEY or LESSKEY_SYSTEM environment variable is set, or if a lesskey file is found in a standard place (see KEY BINDINGS), it is also used as a lesskey file. --lesskey-src=filename Causes less to open and interpret the named file as a lesskey(1) source file. If the LESSKEYIN or LESSKEYIN_SYSTEM environment variable is set, or if a lesskey source file is found in a standard place (see KEY BINDINGS), it is also used as a lesskey source file. Prior to version 582, the lesskey program needed to be run to convert a lesskey source file to a lesskey binary file for less to use. Newer versions of less read the lesskey source file directly and ignore the binary file if the source file exists. -K or --quit-on-intr Causes less to exit immediately (with status 2) when an interrupt character (usually ^C) is typed. Normally, an interrupt character causes less to stop whatever it is doing and return to its command prompt. Note that use of this option makes it impossible to return to the command prompt from the "F" command. -L or --no-lessopen Ignore the LESSOPEN environment variable (see the INPUT PREPROCESSOR section below). This option can be set from within less, but it will apply only to files opened subsequently, not to the file which is currently open. -m or --long-prompt Causes less to prompt verbosely (like more(1)), with the percent into the file. By default, less prompts with a colon. -M or --LONG-PROMPT Causes less to prompt even more verbosely than more(1). -n or --line-numbers Suppresses line numbers. The default (to use line numbers) may cause less to run more slowly in some cases, especially with a very large input file. Suppressing line numbers with the -n option will avoid this problem. 
Using line numbers means: the line number will be displayed in the verbose prompt and in the = command, and the v command will pass the current line number to the editor (see also the discussion of LESSEDIT in PROMPTS below). -N or --LINE-NUMBERS Causes a line number to be displayed at the beginning of each line in the display. -ofilename or --log-file=filename Causes less to copy its input to the named file as it is being viewed. This applies only when the input file is a pipe, not an ordinary file. If the file already exists, less will ask for confirmation before overwriting it. -Ofilename or --LOG-FILE=filename The -O option is like -o, but it will overwrite an existing file without asking for confirmation. If no log file has been specified, the -o and -O options can be used from within less to specify a log file. Without a file name, they will simply report the name of the log file. The "s" command is equivalent to specifying -o from within less. -ppattern or --pattern=pattern The -p option on the command line is equivalent to specifying +/pattern; that is, it tells less to start at the first occurrence of pattern in the file. -Pprompt or --prompt=prompt Provides a way to tailor the three prompt styles to your own preference. This option would normally be put in the LESS environment variable, rather than being typed in with each less command. Such an option must either be the last option in the LESS variable, or be terminated by a dollar sign. -Ps followed by a string changes the default (short) prompt to that string. -Pm changes the medium (-m) prompt. -PM changes the long (-M) prompt. -Ph changes the prompt for the help screen. -P= changes the message printed by the = command. -Pw changes the message printed while waiting for data (in the "F" command). All prompt strings consist of a sequence of letters and special escape sequences. See the section on PROMPTS for more details. -q or --quiet or --silent Causes moderately "quiet" operation: the terminal bell is not rung if an attempt is made to scroll past the end of the file or before the beginning of the file. If the terminal has a "visual bell", it is used instead. The bell will be rung on certain other errors, such as typing an invalid character. The default is to ring the terminal bell in all such cases. -Q or --QUIET or --SILENT Causes totally "quiet" operation: the terminal bell is never rung. If the terminal has a "visual bell", it is used in all cases where the terminal bell would have been rung. -r or --raw-control-chars Causes "raw" control characters to be displayed. The default is to display control characters using the caret notation; for example, a control-A (octal 001) is displayed as "^A" (with some exceptions as described under the -U option). Warning: when the -r option is used, less cannot keep track of the actual appearance of the screen (since this depends on how the screen responds to each type of control character). Thus, various display problems may result, such as long lines being split in the wrong place. USE OF THE -r OPTION IS NOT RECOMMENDED. -R or --RAW-CONTROL-CHARS Like -r, but only ANSI "color" escape sequences and OSC 8 hyperlink sequences are output in "raw" form. Unlike -r, the screen appearance is maintained correctly, provided that there are no escape sequences in the file other than these types of escape sequences. Color escape sequences are only supported when the color is changed within one line, not across lines. 
In other words, the beginning of each line is assumed to be normal (non-colored), regardless of any escape sequences in previous lines. For the purpose of keeping track of screen appearance, these escape sequences are assumed to not move the cursor. OSC 8 hyperlinks are sequences of the form: ESC ] 8 ; ... \7 The terminating sequence may be either a BEL character (\7) or the two-character sequence "ESC \". ANSI color escape sequences are sequences of the form: ESC [ ... m where the "..." is zero or more color specification characters. You can make less think that characters other than "m" can end ANSI color escape sequences by setting the environment variable LESSANSIENDCHARS to the list of characters which can end a color escape sequence. And you can make less think that characters other than the standard ones may appear between the ESC and the m by setting the environment variable LESSANSIMIDCHARS to the list of characters which can appear. -s or --squeeze-blank-lines Causes consecutive blank lines to be squeezed into a single blank line. This is useful when viewing nroff output. -S or --chop-long-lines Causes lines longer than the screen width to be chopped (truncated) rather than wrapped. That is, the portion of a long line that does not fit in the screen width is not displayed until you press RIGHT-ARROW. The default is to wrap long lines; that is, display the remainder on the next line. See also the --wordwrap option. -ttag or --tag=tag The -t option, followed immediately by a TAG, will edit the file containing that tag. For this to work, tag information must be available; for example, there may be a file in the current directory called "tags", which was previously built by ctags(1) or an equivalent command. If the environment variable LESSGLOBALTAGS is set, it is taken to be the name of a command compatible with global(1), and that command is executed to find the tag. (See http://www.gnu.org/software/global/global.html). The -t option may also be specified from within less (using the - command) as a way of examining a new file. The command ":t" is equivalent to specifying -t from within less. -Ttagsfile or --tag-file=tagsfile Specifies a tags file to be used instead of "tags". -u or --underline-special Causes backspaces and carriage returns to be treated as printable characters; that is, they are sent to the terminal when they appear in the input. -U or --UNDERLINE-SPECIAL Causes backspaces, tabs, carriage returns and "formatting characters" (as defined by Unicode) to be treated as control characters; that is, they are handled as specified by the -r option. By default, if neither -u nor -U is given, backspaces which appear adjacent to an underscore character are treated specially: the underlined text is displayed using the terminal's hardware underlining capability. Also, backspaces which appear between two identical characters are treated specially: the overstruck text is printed using the terminal's hardware boldface capability. Other backspaces are deleted, along with the preceding character. Carriage returns immediately followed by a newline are deleted. Other carriage returns are handled as specified by the -r option. Unicode formatting characters, such as the Byte Order Mark, are sent to the terminal. Text which is overstruck or underlined can be searched for if neither -u nor -U is in effect. See also the --proc-backspace, --proc-tab, and --proc- return options. -V or --version Displays the version number of less. 
-w or --hilite-unread Temporarily highlights the first "new" line after a forward movement of a full page. The first "new" line is the line immediately following the line previously at the bottom of the screen. Also highlights the target line after a g or p command. The highlight is removed at the next command which causes movement. If the --status-line option is in effect, the entire line (the width of the screen) is highlighted. Otherwise, only the text in the line is highlighted, unless the -J option is in effect, in which case only the status column is highlighted. -W or --HILITE-UNREAD Like -w, but temporarily highlights the first new line after any forward movement command larger than one line. -xn,... or --tabs=n,... Sets tab stops. If only one n is specified, tab stops are set at multiples of n. If multiple values separated by commas are specified, tab stops are set at those positions, and then continue with the same spacing as the last two. For example, "-x9,17" will set tabs at positions 9, 17, 25, 33, etc. The default for n is 8. -X or --no-init Disables sending the termcap initialization and deinitialization strings to the terminal. This is sometimes desirable if the deinitialization string does something unnecessary, like clearing the screen. -yn or --max-forw-scroll=n Specifies a maximum number of lines to scroll forward. If it is necessary to scroll forward more than n lines, the screen is repainted instead. The -c or -C option may be used to repaint from the top of the screen if desired. By default, any forward movement causes scrolling. -zn or --window=n or -n Changes the default scrolling window size to n lines. The default is one screenful. The z and w commands can also be used to change the window size. The "z" may be omitted for compatibility with some versions of more(1). If the number n is negative, it indicates n lines less than the current screen size. For example, if the screen is 24 lines, -z-4 sets the scrolling window to 20 lines. If the screen is resized to 40 lines, the scrolling window automatically changes to 36 lines. -"cc or --quotes=cc Changes the filename quoting character. This may be necessary if you are trying to name a file which contains both spaces and quote characters. Followed by a single character, this changes the quote character to that character. Filenames containing a space should then be surrounded by that character rather than by double quotes. Followed by two characters, changes the open quote to the first character, and the close quote to the second character. Filenames containing a space should then be preceded by the open quote character and followed by the close quote character. Note that even after the quote characters are changed, this option remains -" (a dash followed by a double quote). -~ or --tilde Normally lines after end of file are displayed as a single tilde (~). This option causes lines after end of file to be displayed as blank lines. -# or --shift Specifies the default number of positions to scroll horizontally in the RIGHTARROW and LEFTARROW commands. If the number specified is zero, it sets the default number of positions to one half of the screen width. Alternately, the number may be specified as a fraction of the width of the screen, starting with a decimal point: .5 is half of the screen width, .3 is three tenths of the screen width, and so on. If the number is specified as a fraction, the actual number of scroll positions is recalculated if the terminal window is resized. 
--exit-follow-on-close When using the "F" command on a pipe, less will automatically stop waiting for more data when the input side of the pipe is closed. --file-size If --file-size is specified, less will determine the size of the file immediately after opening the file. Then the "=" command will display the number of lines in the file. Normally this is not done, because it can be slow if the input file is non-seekable (such as a pipe) and is large. --follow-name Normally, if the input file is renamed while an F command is executing, less will continue to display the contents of the original file despite its name change. If --follow-name is specified, during an F command less will periodically attempt to reopen the file by name. If the reopen succeeds and the file is a different file from the original (which means that a new file has been created with the same name as the original (now renamed) file), less will display the contents of that new file. --header=N[,M] Sets the number of header lines and columns displayed on the screen. The value may be of the form "N,M" where N and M are integers, to set the header lines to N and the header columns to M, or it may be a single integer "N" which sets the header lines to N and the header columns to zero, or it may be ",M" which sets the header columns to M and the header lines to zero. When N is nonzero, the first N lines at the top of the screen are replaced with the first N lines of the file, regardless of what part of the file are being viewed. When M is nonzero, the characters displayed at the beginning of each line are replaced with the first M characters of the line, even if the rest of the line is scrolled horizontally. If either N or M is zero, less stops displaying header lines or columns, respectively. (Note that it may be necessary to change the setting of the -j option to ensure that the target line is not obscured by the header line(s).) --incsearch Subsequent search commands will be "incremental"; that is, less will advance to the next line containing the search pattern as each character of the pattern is typed in. --intr=c Use the character c instead of ^X to interrupt a read when the "Waiting for data" message is displayed. c must be an ASCII character; that is, one with a value between 1 and 127 inclusive. A caret followed by a single character can be used to specify a control character. --line-num-width=n Sets the minimum width of the line number field when the -N option is in effect to n characters. The default is 7. --modelines=n Before displaying a file, less will read the first n lines to try to find a vim-compatible modeline. If n is zero, less does not try to find modelines. By using a modeline, the file itself can specify the tab stops that should be used when viewing it. A modeline contains, anywhere in the line, a program name ("vi", "vim", "ex", or "less"), followed by a colon, possibly followed by the word "set", and finally followed by zero or more option settings. If the word "set" is used, option settings are separated by spaces, and end at the first colon. If the word "set" is not used, option settings may be separated by either spaces or colons. The word "set" is required if the program name is "less" but optional if any of the other three names are used. If any option setting is of the form "tabstop=n" or "ts=n", then tab stops are automatically set as if --tabs=n had been given. See the --tabs description for acceptable values of n. 
--mouse Enables mouse input: scrolling the mouse wheel down moves forward in the file, scrolling the mouse wheel up moves backwards in the file, and clicking the mouse sets the "#" mark to the line where the mouse is clicked. The number of lines to scroll when the wheel is moved can be set by the --wheel-lines option. Mouse input works only on terminals which support X11 mouse reporting, and on the Windows version of less. --MOUSE Like --mouse, except the direction scrolled on mouse wheel movement is reversed. --no-keypad Disables sending the keypad initialization and deinitialization strings to the terminal. This is sometimes useful if the keypad strings make the numeric keypad behave in an undesirable manner. --no-histdups This option changes the behavior so that if a search string or file name is typed in, and the same string is already in the history list, the existing copy is removed from the history list before the new one is added. Thus, a given string will appear only once in the history list. Normally, a string may appear multiple times. --no-number-headers Header lines (defined via the --header option) are not assigned line numbers. Line number 1 is assigned to the first line after any header lines. --no-search-headers Searches do not include header lines or header columns. --no-vbell Disables the terminal's visual bell. --proc-backspace If set, backspaces are handled as if neither the -u option nor the -U option were set. That is, a backspace adjacent to an underscore causes text to be displayed in underline mode, and a backspace between identical characters cause text to be displayed in boldface mode. This option overrides the -u and -U options, so that display of backspaces can be controlled separate from tabs and carriage returns. If not set, backspace display is controlled by the -u and -U options. --PROC-BACKSPACE If set, backspaces are handled as if the -U option were set; that is backspaces are treated as control characters. --proc-return If set, carriage returns are handled as if neither the -u option nor the -U option were set. That is, a carriage return immediately before a newline is deleted. This option overrides the -u and -U options, so that display of carriage returns can be controlled separate from that of backspaces and tabs. If not set, carriage return display is controlled by the -u and -U options. --PROC-RETURN If set, carriage returns are handled as if the -U option were set; that is carriage returns are treated as control characters. --proc-tab If set, tabs are handled as if the -U option were not set. That is, tabs are expanded to spaces. This option overrides the -U option, so that display of tabs can be controlled separate from that of backspaces and carriage returns. If not set, tab display is controlled by the -U options. --PROC-TAB If set, tabs are handled as if the -U option were set; that is tabs are treated as control characters. --redraw-on-quit When quitting, after sending the terminal deinitialization string, redraws the entire last screen. On terminals whose terminal deinitialization string causes the terminal to switch from an alternate screen, this makes the last screenful of the current file remain visible after less has quit. --rscroll=c This option changes the character used to mark truncated lines. It may begin with a two-character attribute indicator like LESSBINFMT does. If there is no attribute indicator, standout is used. If set to "-", truncated lines are not marked. 
--save-marks Save marks in the history file, so marks are retained across different invocations of less. --search-options=... Sets default search modifiers. The value is a string of one or more of the characters E, F, K, N, R or W. Setting any of these has the same effect as typing that control character at the beginning of every search pattern. For example, setting --search-options=W is the same as typing ^W at the beginning of every pattern. The value may also contain a digit between 1 and 5, which has the same effect as typing ^S followed by that digit at the beginning of every search pattern. The value "-" disables all default search modifiers. --show-preproc-errors If a preprocessor produces data, then exits with a non- zero exit code, less will display a warning. --status-col-width=n Sets the width of the status column when the -J option is in effect. The default is 2 characters. --status-line If a line is marked, the entire line (rather than just the status column) is highlighted. Also lines highlighted due to the -w option will have the entire line highlighted. If --use-color is set, the line is colored rather than highlighted. --use-backslash This option changes the interpretations of options which follow this one. After the --use-backslash option, any backslash in an option string is removed and the following character is taken literally. This allows a dollar sign to be included in option strings. --use-color Enables colored text in various places. The -D option can be used to change the colors. Colored text works only if the terminal supports ANSI color escape sequences (as defined in ECMA-48 SGR; see https://www.ecma-international.org/publications-and-standards/standards/ecma-48). --wheel-lines=n Set the number of lines to scroll when the mouse wheel is scrolled and the --mouse or --MOUSE option is in effect. The default is 1 line. --wordwrap When the -S option is not in use, wrap each line at a space or tab if possible, so that a word is not split between two lines. The default is to wrap at any character. -- A command line argument of "--" marks the end of option arguments. Any arguments following this are interpreted as filenames. This can be useful when viewing a file whose name begins with a "-" or "+". + If a command line option begins with +, the remainder of that option is taken to be an initial command to less. For example, +G tells less to start at the end of the file rather than the beginning, and +/xyz tells it to start at the first occurrence of "xyz" in the file. As a special case, +<number> acts like +<number>g; that is, it starts the display at the specified line number (however, see the caveat under the "g" command above). If the option starts with ++, the initial command applies to every file being viewed, not just the first one. The + command described previously may also be used to set (or change) an initial command for every file. | # less
> Open a file for interactive reading, allowing scrolling and search. More
> information: https://greenwoodsoftware.com/less/.
* Open a file:
`less {{source_file}}`
* Page down/up:
`<Space> (down), b (up)`
* Go to end/start of file:
`G (end), g (start)`
* Forward search for a string (press `n`/`N` to go to next/previous match):
`/{{something}}`
* Backward search for a string (press `n`/`N` to go to next/previous match):
`?{{something}}`
* Follow the output of the currently opened file:
`F`
* Open the current file in an editor:
`v`
* Exit:
`q` |
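As an additional sketch (not part of the upstream summary; the pattern and path are placeholders), several of the options described above combine naturally on one command line:
* Open a file at the first occurrence of a pattern, with line numbers displayed and long lines chopped instead of wrapped:
`less --LINE-NUMBERS --chop-long-lines +/{{pattern}} {{path/to/file}}`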
git-add | This command updates the index using the current content found in the working tree, to prepare the content staged for the next commit. It typically adds the current content of existing paths as a whole, but with some options it can also be used to add content with only part of the changes made to the working tree files applied, or remove paths that do not exist in the working tree anymore. The "index" holds a snapshot of the content of the working tree, and it is this snapshot that is taken as the contents of the next commit. Thus after making any changes to the working tree, and before running the commit command, you must use the add command to add any new or modified files to the index. This command can be performed multiple times before a commit. It only adds the content of the specified file(s) at the time the add command is run; if you want subsequent changes included in the next commit, then you must run git add again to add the new content to the index. The git status command can be used to obtain a summary of which files have changes that are staged for the next commit. The git add command will not add ignored files by default. If any ignored files were explicitly specified on the command line, git add will fail with a list of ignored files. Ignored files reached by directory recursion or filename globbing performed by Git (quote your globs before the shell) will be silently ignored. The git add command can be used to add ignored files with the -f (force) option. Please see git-commit(1) for alternative ways to add content to a commit. <pathspec>... Files to add content from. Fileglobs (e.g. *.c) can be given to add all matching files. Also a leading directory name (e.g. dir to add dir/file1 and dir/file2) can be given to update the index to match the current state of the directory as a whole (e.g. specifying dir will record not just a file dir/file1 modified in the working tree, a file dir/file2 added to the working tree, but also a file dir/file3 removed from the working tree). Note that older versions of Git used to ignore removed files; use --no-all option if you want to add modified or new files but ignore removed ones. For more details about the <pathspec> syntax, see the pathspec entry in gitglossary(7). -n, --dry-run Don’t actually add the file(s), just show if they exist and/or will be ignored. -v, --verbose Be verbose. -f, --force Allow adding otherwise ignored files. --sparse Allow updating index entries outside of the sparse-checkout cone. Normally, git add refuses to update index entries whose paths do not fit within the sparse-checkout cone, since those files might be removed from the working tree without warning. See git-sparse-checkout(1) for more details. -i, --interactive Add modified contents in the working tree interactively to the index. Optional path arguments may be supplied to limit operation to a subset of the working tree. See “Interactive mode” for details. -p, --patch Interactively choose hunks of patch between the index and the work tree and add them to the index. This gives the user a chance to review the difference before adding modified contents to the index. This effectively runs add --interactive, but bypasses the initial command menu and directly jumps to the patch subcommand. See “Interactive mode” for details. -e, --edit Open the diff vs. the index in an editor and let the user edit it. After the editor was closed, adjust the hunk headers and apply the patch to the index. 
The intent of this option is to pick and choose lines of the patch to apply, or even to modify the contents of lines to be staged. This can be quicker and more flexible than using the interactive hunk selector. However, it is easy to confuse oneself and create a patch that does not apply to the index. See EDITING PATCHES below. -u, --update Update the index just where it already has an entry matching <pathspec>. This removes as well as modifies index entries to match the working tree, but adds no new files. If no <pathspec> is given when -u option is used, all tracked files in the entire working tree are updated (old versions of Git used to limit the update to the current directory and its subdirectories). -A, --all, --no-ignore-removal Update the index not only where the working tree has a file matching <pathspec> but also where the index already has an entry. This adds, modifies, and removes index entries to match the working tree. If no <pathspec> is given when -A option is used, all files in the entire working tree are updated (old versions of Git used to limit the update to the current directory and its subdirectories). --no-all, --ignore-removal Update the index by adding new files that are unknown to the index and files modified in the working tree, but ignore files that have been removed from the working tree. This option is a no-op when no <pathspec> is used. This option is primarily to help users who are used to older versions of Git, whose "git add <pathspec>..." was a synonym for "git add --no-all <pathspec>...", i.e. ignored removed files. -N, --intent-to-add Record only the fact that the path will be added later. An entry for the path is placed in the index with no content. This is useful for, among other things, showing the unstaged content of such files with git diff and committing them with git commit -a. --refresh Don’t add the file(s), but only refresh their stat() information in the index. --ignore-errors If some files could not be added because of errors indexing them, do not abort the operation, but continue adding the others. The command shall still exit with non-zero status. The configuration variable add.ignoreErrors can be set to true to make this the default behaviour. --ignore-missing This option can only be used together with --dry-run. By using this option the user can check if any of the given files would be ignored, no matter if they are already present in the work tree or not. --no-warn-embedded-repo By default, git add will warn when adding an embedded repository to the index without using git submodule add to create an entry in .gitmodules. This option will suppress the warning (e.g., if you are manually performing operations on submodules). --renormalize Apply the "clean" process freshly to all tracked files to forcibly add them again to the index. This is useful after changing core.autocrlf configuration or the text attribute in order to correct files added with wrong CRLF/LF line endings. This option implies -u. Lone CR characters are untouched, thus while a CRLF cleans to LF, a CRCRLF sequence is only partially cleaned to CRLF. --chmod=(+|-)x Override the executable bit of the added files. The executable bit is only changed in the index, the files on disk are left unchanged. --pathspec-from-file=<file> Pathspec is passed in <file> instead of commandline args. If <file> is exactly - then standard input is used. Pathspec elements are separated by LF or CR/LF. 
Pathspec elements can be quoted as explained for the configuration variable core.quotePath (see git-config(1)). See also --pathspec-file-nul and global --literal-pathspecs. --pathspec-file-nul Only meaningful with --pathspec-from-file. Pathspec elements are separated with NUL character and all other characters are taken literally (including newlines and quotes). -- This option can be used to separate command-line options from the list of files, (useful when filenames might be mistaken for command-line options). | # git add
> Adds changed files to the index. More information:
> https://git-scm.com/docs/git-add.
* Add a file to the index:
`git add {{path/to/file}}`
* Add all files (tracked and untracked):
`git add -A`
* Only add already tracked files:
`git add -u`
* Also add ignored files:
`git add -f`
* Interactively stage parts of files:
`git add -p`
* Interactively stage parts of a given file:
`git add -p {{path/to/file}}`
* Interactively stage a file:
`git add -i` |
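The two extra examples below are sketches that go beyond the upstream summary; the path is a placeholder. Both use options documented above (`--dry-run`, `--all`, `--intent-to-add`):
* Preview which files would be staged without changing the index:
`git add --dry-run --all`
* Record that a new file will be added later, so its unstaged content shows up in `git diff`:
`git add --intent-to-add {{path/to/new_file}}`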
indent | This man page is generated from the file indent.texinfo. This is Edition of "The indent Manual", for Indent Version , last updated . The indent program can be used to make code easier to read. It can also convert from one style of writing C to another. indent understands a substantial amount about the syntax of C, but it also attempts to cope with incomplete and misformed syntax. In version 1.2 and more recent versions, the GNU style of indenting is the default. -as, --align-with-spaces If using tabs for indentation, use spaces for alignment. See INDENTATION. -bad, --blank-lines-after-declarations Force blank lines after the declarations. See BLANK LINES. -bap, --blank-lines-after-procedures Force blank lines after procedure bodies. See BLANK LINES. -bbb, --blank-lines-before-block-comments Force blank lines before block comments. See BLANK LINES. -bbo, --break-before-boolean-operator Prefer to break long lines before boolean operators. See BREAKING LONG LINES. -bc, --blank-lines-after-commas Force newline after comma in declaration. See DECLARATIONS. -bl, --braces-after-if-line Put braces on line after if, etc. See STATEMENTS. -blf, --braces-after-func-def-line Put braces on line following function definition line. See DECLARATIONS. -blin, --brace-indentn Indent braces n spaces. See STATEMENTS. -bls, --braces-after-struct-decl-line Put braces on the line after struct declaration lines. See DECLARATIONS. -br, --braces-on-if-line Put braces on line with if, etc. See STATEMENTS. -brf, --braces-on-func-def-line Put braces on function definition line. See DECLARATIONS. -brs, --braces-on-struct-decl-line Put braces on struct declaration line. See DECLARATIONS. -bs, --Bill-Shannon, --blank-before-sizeof Put a space between sizeof and its argument. See STATEMENTS. -cn, --comment-indentationn Put comments to the right of code in column n. See COMMENTS. -cbin, --case-brace-indentationn Indent braces after a case label N spaces. See STATEMENTS. -cdn, --declaration-comment-columnn Put comments to the right of the declarations in column n. See COMMENTS. -cdb, --comment-delimiters-on-blank-lines Put comment delimiters on blank lines. See COMMENTS. -cdw, --cuddle-do-while Cuddle while of do {} while; and preceding ‘}’. See COMMENTS. -ce, --cuddle-else Cuddle else and preceding ‘}’. See COMMENTS. -cin, --continuation-indentationn Continuation indent of n spaces. See STATEMENTS. -clin, --case-indentationn Case label indent of n spaces. See STATEMENTS. -cpn, --else-endif-columnn Put comments to the right of #else and #endif statements in column n. See COMMENTS. -cs, --space-after-cast Put a space after a cast operator. See STATEMENTS. -dn, --line-comments-indentationn Set indentation of comments not to the right of code to n spaces. See COMMENTS. -bfda, --break-function-decl-args Break the line before all arguments in a declaration. See DECLARATIONS. -bfde, --break-function-decl-args-end Break the line after the last argument in a declaration. See DECLARATIONS. -dj, --left-justify-declarations If -cd 0 is used then comments after declarations are left justified behind the declaration. See DECLARATIONS. -din, --declaration-indentationn Put variables in column n. See DECLARATIONS. -fc1, --format-first-column-comments Format comments in the first column. See COMMENTS. -fca, --format-all-comments Do not disable all formatting of comments. See COMMENTS. -fnc, --fix-nested-comments Fix nested comments. See COMMENTS. -gnu, --gnu-style Use GNU coding style. This is the default. See COMMON STYLES. 
-gts, --gettext-strings Treat gettext _("...") and N_("...") as strings rather than as functions. See BREAKING LONG LINES. -hnl, --honour-newlines Prefer to break long lines at the position of newlines in the input. See BREAKING LONG LINES. -in, --indent-leveln Set indentation level to n spaces. See INDENTATION. -iln, --indent-labeln Set offset for labels to column n. See INDENTATION. -ipn, --parameter-indentationn Indent parameter types in old-style function definitions by n spaces. See INDENTATION. -kr, --k-and-r-style Use Kernighan & Ritchie coding style. See COMMON STYLES. -ln, --line-lengthn Set maximum line length for non-comment lines to n. See BREAKING LONG LINES. -lcn, --comment-line-lengthn Set maximum line length for comment formatting to n. See COMMENTS. -linux, --linux-style Use Linux coding style. See COMMON STYLES. -lp, --continue-at-parentheses Line up continued lines at parentheses. See INDENTATION. -lps, --leave-preprocessor-space Leave space between ‘#’ and preprocessor directive. See INDENTATION. -nlps, --remove-preprocessor-space Remove space between ‘#’ and preprocessor directive. See INDENTATION. -nbad, --no-blank-lines-after-declarations Do not force blank lines after declarations. See BLANK LINES. -nbap, --no-blank-lines-after-procedures Do not force blank lines after procedure bodies. See BLANK LINES. -nbbo, --break-after-boolean-operator Do not prefer to break long lines before boolean operators. See BREAKING LONG LINES. -nbc, --no-blank-lines-after-commas Do not force newlines after commas in declarations. See DECLARATIONS. -nbfda, --dont-break-function-decl-args Don’t put each argument in a function declaration on a separate line. See DECLARATIONS. -ncdb, --no-comment-delimiters-on-blank-lines Do not put comment delimiters on blank lines. See COMMENTS. -ncdw, --dont-cuddle-do-while Do not cuddle } and the while of a do {} while;. See STATEMENTS. -nce, --dont-cuddle-else Do not cuddle } and else. See STATEMENTS. -ncs, --no-space-after-casts Do not put a space after cast operators. See STATEMENTS. -ndjn, --dont-left-justify-declarations Comments after declarations are treated the same as comments after other statements. See DECLARATIONS. -nfc1, --dont-format-first-column-comments Do not format comments in the first column as normal. See COMMENTS. -nfca, --dont-format-comments Do not format any comments. See COMMENTS. -ngts, --no-gettext-strings Treat gettext _("...") and N_("...") as normal functions. This is the default. See BREAKING LONG LINES. -nhnl, --ignore-newlines Do not prefer to break long lines at the position of newlines in the input. See BREAKING LONG LINES. -nip, --no-parameter-indentation Zero width indentation for parameters. See INDENTATION. -nlp, --dont-line-up-parentheses Do not line up parentheses. See STATEMENTS. -npcs, --no-space-after-function-call-names Do not put space after the function in function calls. See STATEMENTS. -nprs, --no-space-after-parentheses Do not put a space after every ’(’ and before every ’)’. See STATEMENTS. -npsl, --dont-break-procedure-type Put the type of a procedure on the same line as its name. See DECLARATIONS. -nsaf, --no-space-after-for Do not put a space after every for. See STATEMENTS. -nsai, --no-space-after-if Do not put a space after every if. See STATEMENTS. -nsaw, --no-space-after-while Do not put a space after every while. See STATEMENTS. -nsc, --dont-star-comments Do not put the ‘*’ character at the left of comments. See COMMENTS. -nsob, --leave-optional-blank-lines Do not swallow optional blank lines. 
See BLANK LINES. -nss, --dont-space-special-semicolon Do not force a space before the semicolon after certain statements. Disables ‘-ss’. See STATEMENTS. -ntac, --dont-tab-align-comments Do not pad comments out to the nearest tabstop. See COMMENTS. -nut, --no-tabs Use spaces instead of tabs. See INDENTATION. -nv, --no-verbosity Disable verbose mode. See MISCELLANEOUS OPTIONS. -orig, --original Use the original Berkeley coding style. See COMMON STYLES. -npro, --ignore-profile Do not read ‘.indent.pro’ files. See INVOKING INDENT. -pal, --pointer-align-left Put asterisks in pointer declarations on the left of spaces, next to types: ‘‘char* p’’. -par, --pointer-align-right Put asterisks in pointer declarations on the right of spaces, next to variable names: ‘‘char *p’’. This is the default behavior. -pcs, --space-after-procedure-calls Insert a space between the name of the procedure being called and the ‘(’. See STATEMENTS. -pin, --paren-indentationn Specify the extra indentation per open parentheses ’(’ when a statement is broken.See STATEMENTS. -pmt, --preserve-mtime Preserve access and modification times on output files.See MISCELLANEOUS OPTIONS. -ppin, --preprocessor-indentationn Specify the indentation for preprocessor conditional statements.See INDENTATION. -prs, --space-after-parentheses Put a space after every ’(’ and before every ’)’. See STATEMENTS. -psl, --procnames-start-lines Put the type of a procedure on the line before its name. See DECLARATIONS. -saf, --space-after-for Put a space after each for. See STATEMENTS. -sai, --space-after-if Put a space after each if. See STATEMENTS. -sar, --spaces-around-initializers Put a space after the ‘{’ and before the ‘}’ in initializers. See DECLARATIONS. -saw, --space-after-while Put a space after each while. See STATEMENTS. -sbin, --struct-brace-indentationn Indent braces of a struct, union or enum N spaces. See STATEMENTS. -sc, --start-left-side-of-comments Put the ‘*’ character at the left of comments. See COMMENTS. -slc, --single-line-conditionals Allow for unbraced conditionals (if, else, etc.) to have their inner statement on the same line. See STATEMENTS. -sob, --swallow-optional-blank-lines Swallow optional blank lines. See BLANK LINES. -ss, --space-special-semicolon On one-line for and while statements, force a blank before the semicolon. See STATEMENTS. -st, --standard-output Write to standard output. See INVOKING INDENT. -T Tell indent the name of typenames. See DECLARATIONS. -tsn, --tab-sizen Set tab size to n spaces. See INDENTATION. -ut, --use-tabs Use tabs. This is the default. See INDENTATION. -v, --verbose Enable verbose mode. See MISCELLANEOUS OPTIONS. -version Output the version number of indent. See MISCELLANEOUS OPTIONS. | # indent
> Change the appearance of a C/C++ program by inserting or deleting
> whitespace. More information:
> https://www.freebsd.org/cgi/man.cgi?query=indent.
* Format C/C++ source according to the Berkeley style:
`indent {{path/to/source_file.c}} {{path/to/indented_file.c}} -nbad -nbap -bc -br -c33 -cd33 -cdb -ce -ci4 -cli0 -di16 -fc1 -fcb -i4 -ip -l75 -lp -npcs -nprs -psl -sc -nsob -ts8`
* Format C/C++ source according to the style of Kernighan & Ritchie (K&R):
`indent {{path/to/source_file.c}} {{path/to/indented_file.c}} -nbad -bap -nbc -br -c33 -cd33 -ncdb -ce -ci4 -cli0 -cs -d0 -di1 -nfc1 -nfcb -i4 -nip -l75 -lp -npcs -nprs -npsl -nsc -nsob` |
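A further sketch, not taken from the upstream summary (the source path is a placeholder), using the predefined style and output options described above:
* Format a file using the Linux kernel style and write the result to `stdout` without touching any file:
`indent --linux-style --standard-output {{path/to/source_file.c}}`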
stty | Print or change terminal characteristics. Mandatory arguments to long options are mandatory for short options too. -a, --all print all current settings in human-readable form -g, --save print all current settings in a stty-readable form -F, --file=DEVICE open and use the specified DEVICE instead of stdin --help display this help and exit --version output version information and exit Optional - before SETTING indicates negation. An * marks non-POSIX settings. The underlying system defines which settings are available. Special characters: * discard CHAR CHAR will toggle discarding of output eof CHAR CHAR will send an end of file (terminate the input) eol CHAR CHAR will end the line * eol2 CHAR alternate CHAR for ending the line erase CHAR CHAR will erase the last character typed intr CHAR CHAR will send an interrupt signal kill CHAR CHAR will erase the current line * lnext CHAR CHAR will enter the next character quoted quit CHAR CHAR will send a quit signal * rprnt CHAR CHAR will redraw the current line start CHAR CHAR will restart the output after stopping it stop CHAR CHAR will stop the output susp CHAR CHAR will send a terminal stop signal * swtch CHAR CHAR will switch to a different shell layer * werase CHAR CHAR will erase the last word typed Special settings: N set the input and output speeds to N bauds * cols N tell the kernel that the terminal has N columns * columns N same as cols N * [-]drain wait for transmission before applying settings (on by default) ispeed N set the input speed to N * line N use line discipline N min N with -icanon, set N characters minimum for a completed read ospeed N set the output speed to N * rows N tell the kernel that the terminal has N rows * size print the number of rows and columns according to the kernel speed print the terminal speed time N with -icanon, set read timeout of N tenths of a second Control settings: [-]clocal disable modem control signals [-]cread allow input to be received * [-]crtscts enable RTS/CTS handshaking csN set character size to N bits, N in [5..8] [-]cstopb use two stop bits per character (one with '-') [-]hup send a hangup signal when the last process closes the tty [-]hupcl same as [-]hup [-]parenb generate parity bit in output and expect parity bit in input [-]parodd set odd parity (or even parity with '-') * [-]cmspar use "stick" (mark/space) parity Input settings: [-]brkint breaks cause an interrupt signal [-]icrnl translate carriage return to newline [-]ignbrk ignore break characters [-]igncr ignore carriage return [-]ignpar ignore characters with parity errors * [-]imaxbel beep and do not flush a full input buffer on a character [-]inlcr translate newline to carriage return [-]inpck enable input parity checking [-]istrip clear high (8th) bit of input characters * [-]iutf8 assume input characters are UTF-8 encoded * [-]iuclc translate uppercase characters to lowercase * [-]ixany let any character restart output, not only start character [-]ixoff enable sending of start/stop characters [-]ixon enable XON/XOFF flow control [-]parmrk mark parity errors (with a 255-0-character sequence) [-]tandem same as [-]ixoff Output settings: * bsN backspace delay style, N in [0..1] * crN carriage return delay style, N in [0..3] * ffN form feed delay style, N in [0..1] * nlN newline delay style, N in [0..1] * [-]ocrnl translate carriage return to newline * [-]ofdel use delete characters for fill instead of NUL characters * [-]ofill use fill (padding) characters instead of timing for delays * [-]olcuc translate lowercase 
characters to uppercase * [-]onlcr translate newline to carriage return-newline * [-]onlret newline performs a carriage return * [-]onocr do not print carriage returns in the first column [-]opost postprocess output * tabN horizontal tab delay style, N in [0..3] * tabs same as tab0 * -tabs same as tab3 * vtN vertical tab delay style, N in [0..1] Local settings: [-]crterase echo erase characters as backspace-space-backspace * crtkill kill all line by obeying the echoprt and echoe settings * -crtkill kill all line by obeying the echoctl and echok settings * [-]ctlecho echo control characters in hat notation ('^c') [-]echo echo input characters * [-]echoctl same as [-]ctlecho [-]echoe same as [-]crterase [-]echok echo a newline after a kill character * [-]echoke same as [-]crtkill [-]echonl echo newline even if not echoing other characters * [-]echoprt echo erased characters backward, between '\' and '/' * [-]extproc enable "LINEMODE"; useful with high latency links * [-]flusho discard output [-]icanon enable special characters: erase, kill, werase, rprnt [-]iexten enable non-POSIX special characters [-]isig enable interrupt, quit, and suspend special characters [-]noflsh disable flushing after interrupt and quit special characters * [-]prterase same as [-]echoprt * [-]tostop stop background jobs that try to write to the terminal * [-]xcase with icanon, escape with '\' for uppercase characters Combination settings: * [-]LCASE same as [-]lcase cbreak same as -icanon -cbreak same as icanon cooked same as brkint ignpar istrip icrnl ixon opost isig icanon, eof and eol characters to their default values -cooked same as raw crt same as echoe echoctl echoke dec same as echoe echoctl echoke -ixany intr ^c erase 0177 kill ^u * [-]decctlq same as [-]ixany ek erase and kill characters to their default values evenp same as parenb -parodd cs7 -evenp same as -parenb cs8 * [-]lcase same as xcase iuclc olcuc litout same as -parenb -istrip -opost cs8 -litout same as parenb istrip opost cs7 nl same as -icrnl -onlcr -nl same as icrnl -inlcr -igncr onlcr -ocrnl -onlret oddp same as parenb parodd cs7 -oddp same as -parenb cs8 [-]parity same as [-]evenp pass8 same as -parenb -istrip cs8 -pass8 same as parenb istrip cs7 raw same as -ignbrk -brkint -ignpar -parmrk -inpck -istrip -inlcr -igncr -icrnl -ixon -ixoff -icanon -opost -isig -iuclc -ixany -imaxbel -xcase min 1 time 0 -raw same as cooked sane same as cread -ignbrk brkint -inlcr -igncr icrnl icanon iexten echo echoe echok -echonl -noflsh -ixoff -iutf8 -iuclc -ixany imaxbel -xcase -olcuc -ocrnl opost -ofill onlcr -onocr -onlret nl0 cr0 tab0 bs0 vt0 ff0 isig -tostop -ofdel -echoprt echoctl echoke -extproc -flusho, all special characters to their default values Handle the tty line connected to standard input. Without arguments, prints baud rate, line discipline, and deviations from stty sane. In settings, CHAR is taken literally, or coded as in ^c, 0x37, 0177 or 127; special values ^- or undef used to disable special characters. | # stty
> Set options for a terminal device interface. More information:
> https://www.gnu.org/software/coreutils/stty.
* Display all settings for the current terminal:
`stty --all`
* Set the number of rows or columns:
`stty {{rows|cols}} {{count}}`
* Get the actual transfer speed of a device:
`stty --file {{path/to/device_file}} speed`
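* Configure a serial device: set the speed to 9600 baud with 8 data bits, no parity, and one stop bit (a sketch combining the settings listed above; values are illustrative):
`stty --file {{path/to/device_file}} 9600 cs8 -parenb -cstopb`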
* Reset all modes to reasonable values for the current terminal:
`stty sane` |
git-column | This command formats the lines of its standard input into a table with multiple columns. Each input line occupies one cell of the table. It is used internally by other git commands to format output into columns. --command=<name> Look up layout mode using configuration variable column.<name> and column.ui. --mode=<mode> Specify layout mode. See configuration variable column.ui for option syntax in git-config(1). --raw-mode=<n> Same as --mode but take mode encoded as a number. This is mainly used by other commands that have already parsed layout mode. --width=<width> Specify the terminal width. By default git column will detect the terminal width, or fall back to 80 if it is unable to do so. --indent=<string> String to be printed at the beginning of each line. --nl=<string> String to be printed at the end of each line, including newline character. --padding=<N> The number of spaces between columns. One space by default. | # git column
> Display data in columns. More information: https://git-scm.com/docs/git-
> column.
* Format `stdin` as multiple columns:
`ls | git column --mode={{column}}`
* Format `stdin` as multiple columns with a maximum width of `100`:
`ls | git column --mode=column --width={{100}}`
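* Format `stdin` as multiple columns, printing a given string at the beginning of each line (the indent string is a placeholder):
`ls | git column --mode=column --indent="{{indent_string}}"`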
* Format `stdin` as multiple columns with a maximum padding of `30`:
`ls | git column --mode=column --padding={{30}}` |
who | Print information about users who are currently logged in. -a, --all same as -b -d --login -p -r -t -T -u -b, --boot time of last system boot -d, --dead print dead processes -H, --heading print line of column headings -l, --login print system login processes --lookup attempt to canonicalize hostnames via DNS -m only hostname and user associated with stdin -p, --process print active processes spawned by init -q, --count all login names and number of users logged on -r, --runlevel print current runlevel -s, --short print only name, line, and time (default) -t, --time print last system clock change -T, -w, --mesg add user's message status as +, - or ? -u, --users list users logged in --message same as -T --writable same as -T --help display this help and exit --version output version information and exit If FILE is not specified, use /var/run/utmp. /var/log/wtmp as FILE is common. If ARG1 ARG2 given, -m presumed: 'am i' or 'mom likes' are usual. | # who
> Display who is logged in and related data (processes, boot time). More
> information: https://www.gnu.org/software/coreutils/who.
* Display the username, line, and time of all currently logged-in sessions:
`who`
* Display information only for the current terminal session:
`who am i`
* Display all available information:
`who -a`
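* Display the time of the last system [b]oot (uses the `-b` flag described above):
`who -b`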
* Display all available information with table headers:
`who -a -H` |
git-notes | Adds, removes, or reads notes attached to objects, without touching the objects themselves. By default, notes are saved to and read from refs/notes/commits, but this default can be overridden. See the OPTIONS, CONFIGURATION, and ENVIRONMENT sections below. If this ref does not exist, it will be quietly created when it is first needed to store a note. A typical use of notes is to supplement a commit message without changing the commit itself. Notes can be shown by git log along with the original commit message. To distinguish these notes from the message stored in the commit object, the notes are indented like the message, after an unindented line saying "Notes (<refname>):" (or "Notes:" for refs/notes/commits). Notes can also be added to patches prepared with git format-patch by using the --notes option. Such notes are added as a patch commentary after a three dash separator line. To change which notes are shown by git log, see the "notes.displayRef" discussion in the section called “CONFIGURATION”. See the "notes.rewrite.<command>" configuration for a way to carry notes across commands that rewrite commits. -f, --force When adding notes to an object that already has notes, overwrite the existing notes (instead of aborting). -m <msg>, --message=<msg> Use the given note message (instead of prompting). If multiple -m options are given, their values are concatenated as separate paragraphs. Lines starting with # and empty lines other than a single line between paragraphs will be stripped out. -F <file>, --file=<file> Take the note message from the given file. Use - to read the note message from the standard input. Lines starting with # and empty lines other than a single line between paragraphs will be stripped out. -C <object>, --reuse-message=<object> Take the given blob object (for example, another note) as the note message. (Use git notes copy <object> instead to copy notes between objects.) -c <object>, --reedit-message=<object> Like -C, but with -c the editor is invoked, so that the user can further edit the note message. --allow-empty Allow an empty note object to be stored. The default behavior is to automatically remove empty notes. --ref <ref> Manipulate the notes tree in <ref>. This overrides GIT_NOTES_REF and the "core.notesRef" configuration. The ref specifies the full refname when it begins with refs/notes/; when it begins with notes/, refs/ and otherwise refs/notes/ is prefixed to form a full name of the ref. --ignore-missing Do not consider it an error to request removing notes from an object that does not have notes attached to it. --stdin Also read the object names to remove notes from the standard input (there is no reason you cannot combine this with object names from the command line). -n, --dry-run Do not remove anything; just report the object names whose notes would be removed. -s <strategy>, --strategy=<strategy> When merging notes, resolve notes conflicts using the given strategy. The following strategies are recognized: "manual" (default), "ours", "theirs", "union" and "cat_sort_uniq". This option overrides the "notes.mergeStrategy" configuration setting. See the "NOTES MERGE STRATEGIES" section below for more information on each notes merge strategy. --commit Finalize an in-progress git notes merge. Use this option when you have resolved the conflicts that git notes merge stored in .git/NOTES_MERGE_WORKTREE. 
This amends the partial merge commit created by git notes merge (stored in .git/NOTES_MERGE_PARTIAL) by adding the notes in .git/NOTES_MERGE_WORKTREE. The notes ref stored in the .git/NOTES_MERGE_REF symref is updated to the resulting commit. --abort Abort/reset an in-progress git notes merge, i.e. a notes merge with conflicts. This simply removes all files related to the notes merge. -q, --quiet When merging notes, operate quietly. -v, --verbose When merging notes, be more verbose. When pruning notes, report all object names whose notes are removed. | # git notes
> Add or inspect object notes. More information: https://git-scm.com/docs/git-
> notes.
* List all notes and the objects they are attached to:
`git notes list`
* List all notes attached to a given object (defaults to HEAD):
`git notes list [{{object}}]`
* Show the notes attached to a given object (defaults to HEAD):
`git notes show [{{object}}]`
* Append a note to a specified object (opens the default text editor):
`git notes append {{object}}`
* Append a note to a given object (defaults to HEAD), specifying the message:
`git notes append --message="{{message_text}}"`
* Edit an existing note (defaults to HEAD):
`git notes edit [{{object}}]`
* Copy a note from one object to another:
`git notes copy {{source_object}} {{target_object}}`
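* List notes stored under a different notes ref instead of the default `refs/notes/commits` (the ref name is a placeholder):
`git notes --ref={{ref_name}} list`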
* Remove all the notes added to a specified object:
`git notes remove {{object}}` |
git-mv | Move or rename a file, directory or symlink. git mv [-v] [-f] [-n] [-k] <source> <destination> git mv [-v] [-f] [-n] [-k] <source> ... <destination directory> In the first form, it renames <source>, which must exist and be either a file, symlink or directory, to <destination>. In the second form, the last argument has to be an existing directory; the given sources will be moved into this directory. The index is updated after successful completion, but the change must still be committed. -f, --force Force renaming or moving of a file even if the <destination> exists. -k Skip move or rename actions which would lead to an error condition. An error happens when a source is neither existing nor controlled by Git, or when it would overwrite an existing file unless -f is given. -n, --dry-run Do nothing; only show what would happen -v, --verbose Report the names of files as they are moved. | # git mv
> Move or rename files and update the Git index. More information:
> https://git-scm.com/docs/git-mv.
* Move a file inside the repo and add the movement to the next commit:
`git mv {{path/to/file}} {{new/path/to/file}}`
* Rename a file or directory and add the renaming to the next commit:
`git mv {{path/to/file_or_directory}} {{path/to/destination}}`
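* Show what would be moved or renamed without actually changing anything (dry run, as described above):
`git mv --dry-run {{path/to/file_or_directory}} {{path/to/destination}}`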
* Overwrite the file or directory in the target path if it exists:
`git mv --force {{path/to/file_or_directory}} {{path/to/destination}}` |
strip | GNU strip discards all symbols from object files objfile. The list of object files may include archives. At least one object file must be given. strip modifies the files named in its argument, rather than writing modified copies under different names. -F bfdname --target=bfdname Treat the original objfile as a file with the object code format bfdname, and rewrite it in the same format. --help Show a summary of the options to strip and exit. --info Display a list showing all architectures and object formats available. -I bfdname --input-target=bfdname Treat the original objfile as a file with the object code format bfdname. -O bfdname --output-target=bfdname Replace objfile with a file in the output format bfdname. -R sectionname --remove-section=sectionname Remove any section named sectionname from the output file, in addition to whatever sections would otherwise be removed. This option may be given more than once. Note that using this option inappropriately may make the output file unusable. The wildcard character * may be given at the end of sectionname. If so, then any section starting with sectionname will be removed. If the first character of sectionpattern is the exclamation point (!) then matching sections will not be removed even if an earlier use of --remove-section on the same command line would otherwise remove it. For example: --remove-section=.text.* --remove-section=!.text.foo will remove all sections matching the pattern '.text.*', but will not remove the section '.text.foo'. --keep-section=sectionpattern When removing sections from the output file, keep sections that match sectionpattern. --remove-relocations=sectionpattern Remove relocations from the output file for any section matching sectionpattern. This option may be given more than once. Note that using this option inappropriately may make the output file unusable. Wildcard characters are accepted in sectionpattern. For example: --remove-relocations=.text.* will remove the relocations for all sections matching the patter '.text.*'. If the first character of sectionpattern is the exclamation point (!) then matching sections will not have their relocation removed even if an earlier use of --remove-relocations on the same command line would otherwise cause the relocations to be removed. For example: --remove-relocations=.text.* --remove-relocations=!.text.foo will remove all relocations for sections matching the pattern '.text.*', but will not remove relocations for the section '.text.foo'. -s --strip-all Remove all symbols. -g -S -d --strip-debug Remove debugging symbols only. --strip-dwo Remove the contents of all DWARF .dwo sections, leaving the remaining debugging sections and all symbols intact. See the description of this option in the objcopy section for more information. --strip-unneeded Remove all symbols that are not needed for relocation processing in addition to debugging symbols and sections stripped by --strip-debug. -K symbolname --keep-symbol=symbolname When stripping symbols, keep symbol symbolname even if it would normally be stripped. This option may be given more than once. -M --merge-notes --no-merge-notes For ELF files, attempt (or do not attempt) to reduce the size of any SHT_NOTE type sections by removing duplicate notes. The default is to attempt this reduction unless stripping debug or DWO information. -N symbolname --strip-symbol=symbolname Remove symbol symbolname from the source file. This option may be given more than once, and may be combined with strip options other than -K. 
-o file Put the stripped output in file, rather than replacing the existing file. When this argument is used, only one objfile argument may be specified. -p --preserve-dates Preserve the access and modification dates of the file. -D --enable-deterministic-archives Operate in deterministic mode. When copying archive members and writing the archive index, use zero for UIDs, GIDs, timestamps, and use consistent file modes for all files. If binutils was configured with --enable-deterministic-archives, then this mode is on by default. It can be disabled with the -U option, below. -U --disable-deterministic-archives Do not operate in deterministic mode. This is the inverse of the -D option, above: when copying archive members and writing the archive index, use their actual UID, GID, timestamp, and file mode values. This is the default unless binutils was configured with --enable-deterministic-archives. -w --wildcard Permit regular expressions in symbolnames used in other command line options. The question mark (?), asterisk (*), backslash (\) and square brackets ([]) operators can be used anywhere in the symbol name. If the first character of the symbol name is the exclamation point (!) then the sense of the switch is reversed for that symbol. For example: -w -K !foo -K fo* would cause strip to only keep symbols that start with the letters "fo", but to discard the symbol "foo". -x --discard-all Remove non-global symbols. -X --discard-locals Remove compiler-generated local symbols. (These usually start with L or ..) --keep-section-symbols When stripping a file, perhaps with --strip-debug or --strip-unneeded, retain any symbols specifying section names, which would otherwise get stripped. --keep-file-symbols When stripping a file, perhaps with --strip-debug or --strip-unneeded, retain any symbols specifying source file names, which would otherwise get stripped. --only-keep-debug Strip a file, emptying the contents of any sections that would not be stripped by --strip-debug and leaving the debugging sections intact. In ELF files, this preserves all the note sections in the output as well. Note - the section headers of the stripped sections are preserved, including their sizes, but the contents of the section are discarded. The section headers are preserved so that other tools can match up the debuginfo file with the real executable, even if that executable has been relocated to a different address space. The intention is that this option will be used in conjunction with --add-gnu-debuglink to create a two part executable. One a stripped binary which will occupy less space in RAM and in a distribution and the second a debugging information file which is only needed if debugging abilities are required. The suggested procedure to create these files is as follows: 1.<Link the executable as normal. Assuming that it is called> "foo" then... 1.<Run "objcopy --only-keep-debug foo foo.dbg" to> create a file containing the debugging info. 1.<Run "objcopy --strip-debug foo" to create a> stripped executable. 1.<Run "objcopy --add-gnu-debuglink=foo.dbg foo"> to add a link to the debugging info into the stripped executable. Note---the choice of ".dbg" as an extension for the debug info file is arbitrary. Also the "--only-keep-debug" step is optional. You could instead do this: 1.<Link the executable as normal.> 1.<Copy "foo" to "foo.full"> 1.<Run "strip --strip-debug foo"> 1.<Run "objcopy --add-gnu-debuglink=foo.full foo"> i.e., the file pointed to by the --add-gnu-debuglink can be the full executable. 
It does not have to be a file created by the --only-keep-debug switch. Note---this switch is only intended for use on fully linked files. It does not make sense to use it on object files where the debugging information may be incomplete. Besides the gnu_debuglink feature currently only supports the presence of one filename containing debugging information, not multiple filenames on a one-per-object-file basis. -V --version Show the version number for strip. -v --verbose Verbose output: list all object files modified. In the case of archives, strip -v lists all members of the archive. @file Read command-line options from file. The options read are inserted in place of the original @file option. If file does not exist, or cannot be read, then the option will be treated literally, and not removed. Options in file are separated by whitespace. A whitespace character may be included in an option by surrounding the entire option in either single or double quotes. Any character (including a backslash) may be included by prefixing the character to be included with a backslash. The file may itself contain additional @file options; any such options will be processed recursively. | # strip
> Discard symbols from executables or object files. More information:
> https://manned.org/strip.
* Replace the input file with its stripped version:
`strip {{path/to/file}}`
* Strip symbols from a file, saving the output to a specific file:
`strip {{path/to/input_file}} -o {{path/to/output_file}}`
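* Split debugging information into a separate file and link it from the stripped executable (a sketch of the two-part procedure described above; file names are placeholders):
`objcopy --only-keep-debug {{executable}} {{executable}}.dbg && strip --strip-debug {{executable}} && objcopy --add-gnu-debuglink={{executable}}.dbg {{executable}}`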
* Strip debug symbols only:
`strip --strip-debug {{path/to/file.o}}` |
bash | Bash is an sh-compatible command language interpreter that executes commands read from the standard input or from a file. Bash also incorporates useful features from the Korn and C shells (ksh and csh). Bash is intended to be a conformant implementation of the Shell and Utilities portion of the IEEE POSIX specification (IEEE Standard 1003.1). Bash can be configured to be POSIX-conformant by default. All of the single-character shell options documented in the description of the set builtin command, including -o, can be used as options when the shell is invoked. In addition, bash interprets the following options when it is invoked: -c If the -c option is present, then commands are read from the first non-option argument command_string. If there are arguments after the command_string, the first argument is assigned to $0 and any remaining arguments are assigned to the positional parameters. The assignment to $0 sets the name of the shell, which is used in warning and error messages. -i If the -i option is present, the shell is interactive. -l Make bash act as if it had been invoked as a login shell (see INVOCATION below). -r If the -r option is present, the shell becomes restricted (see RESTRICTED SHELL below). -s If the -s option is present, or if no arguments remain after option processing, then commands are read from the standard input. This option allows the positional parameters to be set when invoking an interactive shell or when reading input through a pipe. -D A list of all double-quoted strings preceded by $ is printed on the standard output. These are the strings that are subject to language translation when the current locale is not C or POSIX. This implies the -n option; no commands will be executed. [-+]O [shopt_option] shopt_option is one of the shell options accepted by the shopt builtin (see SHELL BUILTIN COMMANDS below). If shopt_option is present, -O sets the value of that option; +O unsets it. If shopt_option is not supplied, the names and values of the shell options accepted by shopt are printed on the standard output. If the invocation option is +O, the output is displayed in a format that may be reused as input. -- A -- signals the end of options and disables further option processing. Any arguments after the -- are treated as filenames and arguments. An argument of - is equivalent to --. Bash also interprets a number of multi-character options. These options must appear on the command line before the single- character options to be recognized. --debugger Arrange for the debugger profile to be executed before the shell starts. Turns on extended debugging mode (see the description of the extdebug option to the shopt builtin below). --dump-po-strings Equivalent to -D, but the output is in the GNU gettext po (portable object) file format. --dump-strings Equivalent to -D. --help Display a usage message on standard output and exit successfully. --init-file file --rcfile file Execute commands from file instead of the standard personal initialization file ~/.bashrc if the shell is interactive (see INVOCATION below). --login Equivalent to -l. --noediting Do not use the GNU readline library to read command lines when the shell is interactive. --noprofile Do not read either the system-wide startup file /etc/profile or any of the personal initialization files ~/.bash_profile, ~/.bash_login, or ~/.profile. By default, bash reads these files when it is invoked as a login shell (see INVOCATION below). 
--norc Do not read and execute the personal initialization file ~/.bashrc if the shell is interactive. This option is on by default if the shell is invoked as sh. --posix Change the behavior of bash where the default operation differs from the POSIX standard to match the standard (posix mode). See SEE ALSO below for a reference to a document that details how posix mode affects bash's behavior. --restricted The shell becomes restricted (see RESTRICTED SHELL below). --verbose Equivalent to -v. --version Show version information for this instance of bash on the standard output and exit successfully. | # bash
> Bourne-Again SHell, an `sh`-compatible command-line interpreter. See also:
> `zsh`, `histexpand` (history expansion). More information:
> https://gnu.org/software/bash/.
* Start an interactive shell session:
`bash`
* Start an interactive shell session without loading startup configs:
`bash --norc`
* Execute specific [c]ommands:
`bash -c "{{echo 'bash is executed'}}"`
* Execute a specific script:
`bash {{path/to/script.sh}}`
* Execute a specific script while printing each command before executing it:
`bash -x {{path/to/script.sh}}`
* Execute a specific script and stop at the first [e]rror:
`bash -e {{path/to/script.sh}}`
* Execute specific commands from `stdin`:
`{{echo "echo 'bash is executed'"}} | bash`
* Start a [r]estricted shell session:
`bash -r` |
exit | The exit utility shall cause the shell to exit from its current execution environment with the exit status specified by the unsigned decimal integer n. If the current execution environment is a subshell environment, the shell shall exit from the subshell environment with the specified exit status and continue in the environment from which that subshell environment was invoked; otherwise, the shell utility shall terminate with the specified exit status. If n is specified, but its value is not between 0 and 255 inclusively, the exit status is undefined. A trap on EXIT shall be executed before the shell terminates, except when the exit utility is invoked in that trap itself, in which case the shell shall exit immediately. None. | # exit
> Exit the shell. More information: https://manned.org/exit.
* Exit the shell with the exit code of the last command executed:
`exit`
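* Exit a subshell with a specific status and continue in the parent shell, then print that status (a small illustration of the subshell behavior described above):
`(exit {{2}}); echo $?`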
* Exit the shell with the specified exit code:
`exit {{exit_code}}` |
uniq | The uniq utility shall read an input file comparing adjacent lines, and write one copy of each input line on the output. The second and succeeding copies of repeated adjacent input lines shall not be written. The trailing <newline> of each line in the input shall be ignored when doing comparisons. Repeated lines in the input shall not be detected if they are not adjacent. The uniq utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines, except that '+' may be recognized as an option delimiter as well as '-'. The following options shall be supported: -c Precede each output line with a count of the number of times the line occurred in the input. -d Suppress the writing of lines that are not repeated in the input. -f fields Ignore the first fields fields on each input line when doing comparisons, where fields is a positive decimal integer. A field is the maximal string matched by the basic regular expression: [[:blank:]]*[^[:blank:]]* If the fields option-argument specifies more fields than appear on an input line, a null string shall be used for comparison. -s chars Ignore the first chars characters when doing comparisons, where chars shall be a positive decimal integer. If specified in conjunction with the -f option, the first chars characters after the first fields fields shall be ignored. If the chars option- argument specifies more characters than remain on an input line, a null string shall be used for comparison. -u Suppress the writing of lines that are repeated in the input. | # uniq
> Output the unique lines from the given input or file. Since it does not
> detect repeated lines unless they are adjacent, we need to sort them first.
> More information: https://www.gnu.org/software/coreutils/uniq.
* Display each line once:
`sort {{path/to/file}} | uniq`
* Display only unique lines:
`sort {{path/to/file}} | uniq -u`
* Display only duplicate lines:
`sort {{path/to/file}} | uniq -d`
* Display number of occurrences of each line along with that line:
`sort {{path/to/file}} | uniq -c`
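* Display only the duplicate lines, each preceded by its number of occurrences (combines the `-c` and `-d` options described above):
`sort {{path/to/file}} | uniq -d -c`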
* Display number of occurrences of each line, sorted by the most frequent:
`sort {{path/to/file}} | uniq -c | sort -nr` |
git-mailinfo | Reads a single e-mail message from the standard input, and writes the commit log message in <msg> file, and the patches in <patch> file. The author name, e-mail and e-mail subject are written out to the standard output to be used by git am to create a commit. It is usually not necessary to use this command directly. See git-am(1) instead. -k Usually the program removes email cruft from the Subject: header line to extract the title line for the commit log message. This option prevents this munging, and is most useful when used to read back git format-patch -k output. Specifically, the following are removed until none of them remain: • Leading and trailing whitespace. • Leading Re:, re:, and :. • Leading bracketed strings (between [ and ], usually [PATCH]). Finally, runs of whitespace are normalized to a single ASCII space character. -b When -k is not in effect, all leading strings bracketed with [ and ] pairs are stripped. This option limits the stripping to only the pairs whose bracketed string contains the word "PATCH". -u The commit log message, author name and author email are taken from the e-mail, and after minimally decoding MIME transfer encoding, re-coded in the charset specified by i18n.commitEncoding (defaulting to UTF-8) by transliterating them. This used to be optional but now it is the default. Note that the patch is always used as-is without charset conversion, even with this flag. --encoding=<encoding> Similar to -u. But when re-coding, the charset specified here is used instead of the one specified by i18n.commitEncoding or UTF-8. -n Disable all charset re-coding of the metadata. -m, --message-id Copy the Message-ID header at the end of the commit message. This is useful in order to associate commits with mailing list discussions. --scissors Remove everything in body before a scissors line (e.g. "-- >8 --"). The line represents scissors and perforation marks, and is used to request the reader to cut the message at that line. If that line appears in the body of the message before the patch, everything before it (including the scissors line itself) is ignored when this option is used. This is useful if you want to begin your message in a discussion thread with comments and suggestions on the message you are responding to, and to conclude it with a patch submission, separating the discussion and the beginning of the proposed commit log message with a scissors line. This can be enabled by default with the configuration option mailinfo.scissors. --no-scissors Ignore scissors lines. Useful for overriding mailinfo.scissors settings. --quoted-cr=<action> Action when processes email messages sent with base64 or quoted-printable encoding, and the decoded lines end with a CRLF instead of a simple LF. The valid actions are: • nowarn: Git will do nothing when such a CRLF is found. • warn: Git will issue a warning for each message if such a CRLF is found. • strip: Git will convert those CRLF to LF. The default action could be set by configuration option mailinfo.quotedCR. If no such configuration option has been set, warn will be used. <msg> The commit log message extracted from e-mail, usually except the title line which comes from e-mail Subject. <patch> The patch extracted from e-mail. | # git mailinfo
> Extract patch and authorship information from a single email message. More
> information: https://git-scm.com/docs/git-mailinfo.
* Extract the patch and author data from an email message:
`git mailinfo {{message|patch}}`
* Extract the patch and author data, keeping the subject line intact (`-k` prevents the removal of whitespace, `Re:` prefixes and bracketed strings):
`git mailinfo -k {{message|patch}}`
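* Read an email message from a file and write the commit message and patch to separate files (a sketch of the basic invocation described above; file names are placeholders):
`git mailinfo {{path/to/msg_file}} {{path/to/patch_file}} < {{path/to/email}}`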
* Remove everything from the body before a scissors line (e.g. "-- >8 --") and retrieve the message or patch:
`git mailinfo --scissors {{message|patch}}` |
git-annotate | Annotates each line in the given file with information from the commit which introduced the line. Optionally annotates from a given revision. The only difference between this command and git-blame(1) is that they use slightly different output formats, and this command exists only for backward compatibility to support existing scripts, and provide a more familiar command name for people coming from other SCM systems. -b Show blank SHA-1 for boundary commits. This can also be controlled via the blame.blankBoundary config option. --root Do not treat root commits as boundaries. This can also be controlled via the blame.showRoot config option. --show-stats Include additional statistics at the end of blame output. -L <start>,<end>, -L :<funcname> Annotate only the line range given by <start>,<end>, or by the function name regex <funcname>. May be specified multiple times. Overlapping ranges are allowed. <start> and <end> are optional. -L <start> or -L <start>, spans from <start> to end of file. -L ,<end> spans from start of file to <end>. <start> and <end> can take one of these forms: • number If <start> or <end> is a number, it specifies an absolute line number (lines count from 1). • /regex/ This form will use the first line matching the given POSIX regex. If <start> is a regex, it will search from the end of the previous -L range, if any, otherwise from the start of file. If <start> is ^/regex/, it will search from the start of file. If <end> is a regex, it will search starting at the line given by <start>. • +offset or -offset This is only valid for <end> and will specify a number of lines before or after the line given by <start>. If :<funcname> is given in place of <start> and <end>, it is a regular expression that denotes the range from the first funcname line that matches <funcname>, up to the next funcname line. :<funcname> searches from the end of the previous -L range, if any, otherwise from the start of file. ^:<funcname> searches from the start of file. The function names are determined in the same way as git diff works out patch hunk headers (see Defining a custom hunk-header in gitattributes(5)). -l Show long rev (Default: off). -t Show raw timestamp (Default: off). -S <revs-file> Use revisions from revs-file instead of calling git-rev-list(1). --reverse <rev>..<rev> Walk history forward instead of backward. Instead of showing the revision in which a line appeared, this shows the last revision in which a line has existed. This requires a range of revision like START..END where the path to blame exists in START. git blame --reverse START is taken as git blame --reverse START..HEAD for convenience. --first-parent Follow only the first parent commit upon seeing a merge commit. This option can be used to determine when a line was introduced to a particular integration branch, rather than when it was introduced to the history overall. -p, --porcelain Show in a format designed for machine consumption. --line-porcelain Show the porcelain format, but output commit information for each line, not just the first time a commit is referenced. Implies --porcelain. --incremental Show the result incrementally in a format designed for machine consumption. --encoding=<encoding> Specifies the encoding used to output author names and commit summaries. Setting it to none makes blame output unconverted data. For more information see the discussion about encoding in the git-log(1) manual page. 
--contents <file> Annotate using the contents from the named file, starting from <rev> if it is specified, and HEAD otherwise. You may specify - to make the command read from the standard input for the file contents. --date <format> Specifies the format used to output dates. If --date is not provided, the value of the blame.date config variable is used. If the blame.date config variable is also not set, the iso format is used. For supported values, see the discussion of the --date option at git-log(1). --[no-]progress Progress status is reported on the standard error stream by default when it is attached to a terminal. This flag enables progress reporting even if not attached to a terminal. Can’t use --progress together with --porcelain or --incremental. -M[<num>] Detect moved or copied lines within a file. When a commit moves or copies a block of lines (e.g. the original file has A and then B, and the commit changes it to B and then A), the traditional blame algorithm notices only half of the movement and typically blames the lines that were moved up (i.e. B) to the parent and assigns blame to the lines that were moved down (i.e. A) to the child commit. With this option, both groups of lines are blamed on the parent by running extra passes of inspection. <num> is optional but it is the lower bound on the number of alphanumeric characters that Git must detect as moving/copying within a file for it to associate those lines with the parent commit. The default value is 20. -C[<num>] In addition to -M, detect lines moved or copied from other files that were modified in the same commit. This is useful when you reorganize your program and move code around across files. When this option is given twice, the command additionally looks for copies from other files in the commit that creates the file. When this option is given three times, the command additionally looks for copies from other files in any commit. <num> is optional but it is the lower bound on the number of alphanumeric characters that Git must detect as moving/copying between files for it to associate those lines with the parent commit. And the default value is 40. If there are more than one -C options given, the <num> argument of the last -C will take effect. --ignore-rev <rev> Ignore changes made by the revision when assigning blame, as if the change never happened. Lines that were changed or added by an ignored commit will be blamed on the previous commit that changed that line or nearby lines. This option may be specified multiple times to ignore more than one revision. If the blame.markIgnoredLines config option is set, then lines that were changed by an ignored commit and attributed to another commit will be marked with a ? in the blame output. If the blame.markUnblamableLines config option is set, then those lines touched by an ignored commit that we could not attribute to another revision are marked with a *. --ignore-revs-file <file> Ignore revisions listed in file, which must be in the same format as an fsck.skipList. This option may be repeated, and these files will be processed after any files specified with the blame.ignoreRevsFile config option. An empty file name, "", will clear the list of revs from previously processed files. --color-lines Color line annotations in the default format differently if they come from the same commit as the preceding line. This makes it easier to distinguish code blocks introduced by different commits. 
The color defaults to cyan and can be adjusted using the color.blame.repeatedLines config option. --color-by-age Color line annotations depending on the age of the line in the default format. The color.blame.highlightRecent config option controls what color is used for each range of age. -h Show help message. | # git annotate
> Show commit hash and last author on each line of a file. See `git blame`,
> which is preferred over `git annotate`. `git annotate` is provided for those
> familiar with other version control systems. More information: https://git-
> scm.com/docs/git-annotate.
* Print a file with the author name and commit hash prepended to each line:
`git annotate {{path/to/file}}`
* Print a file with the author email and commit hash prepended to each line:
`git annotate -e {{path/to/file}}`
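* Annotate only a specific range of lines (line numbers are illustrative):
`git annotate -L {{10}},{{20}} {{path/to/file}}`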
* Print only rows that match a regular expression:
`git annotate -L :{{regexp}} {{path/to/file}}` |
pstree | pstree shows running processes as a tree. The tree is rooted at either pid or init if pid is omitted. If a user name is specified, all process trees rooted at processes owned by that user are shown. pstree visually merges identical branches by putting them in square brackets and prefixing them with the repetition count, e.g. init-+-getty |-getty |-getty `-getty becomes init---4*[getty] Child threads of a process are found under the parent process and are shown with the process name in curly braces, e.g. icecast2---13*[{icecast2}] If pstree is called as pstree.x11 then it will prompt the user at the end of the line to press return and will not return until that has happened. This is useful for when pstree is run in a xterminal. Certain kernel or mount parameters, such as the hidepid option for procfs, will hide information for some processes. In these situations pstree will attempt to build the tree without this information, showing process names as question marks. -a Show command line arguments. If the command line of a process is swapped out, that process is shown in parentheses. -a implicitly disables compaction for processes but not threads. -A Use ASCII characters to draw the tree. -c Disable compaction of identical subtrees. By default, subtrees are compacted whenever possible. -C Color the process name by given attribute. Currently pstree only accepts the value age which colors by process age. Processes newer than 60 seconds are green, newer than an hour yellow and the remaining red. -g Show PGIDs. Process Group IDs are shown as decimal numbers in parentheses after each process name. If both PIDs and PGIDs are displayed then PIDs are shown first. -G Use VT100 line drawing characters. -h Highlight the current process and its ancestors. This is a no-op if the terminal doesn't support highlighting or if neither the current process nor any of its ancestors are in the subtree being shown. -H Like -h, but highlight the specified process instead. Unlike with -h, pstree fails when using -H if highlighting is not available. -l Display long lines. By default, lines are truncated to either the COLUMNS environment variable or the display width. If neither of these methods work, the default of 132 columns is used. -n Sort processes with the same parent by PID instead of by name. (Numeric sort.) -N Show individual trees for each namespace of the type specified. The available types are: ipc, mnt, net, pid, time, user, uts. Regular users don't have access to other users' processes information, so the output will be limited. -p Show PIDs. PIDs are shown as decimal numbers in parentheses after each process name. -p implicitly disables compaction. -s Show parent processes of the specified process. -S Show namespaces transitions. Like -N, the output is limited when running as a regular user. -t Show full names for threads when available. -T Hide threads and only show processes. -u Show uid transitions. Whenever the uid of a process differs from the uid of its parent, the new uid is shown in parentheses after the process name. -U Use UTF-8 (Unicode) line drawing characters. Under Linux 1.1-54 and above, UTF-8 mode is entered on the console with echo -e ' 33%8' and left with echo -e ' 33%@'. -V Display version information. -Z Show the current security attributes of the process. For SELinux systems this will be the security context. | # pstree
> A convenient tool to show running processes as a tree. More information:
> https://manned.org/pstree.
* Display a tree of processes:
`pstree`
* Display a tree of processes with PIDs:
`pstree -p`
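* Display a tree rooted at a specific process, showing command line [a]rguments:
`pstree -a {{pid}}`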
* Display all process trees rooted at processes owned by a specific user:
`pstree {{user}}` |
ac | ac prints out a report of connect time (in hours) based on the logins/logouts in the current wtmp file. A total is also printed out. The accounting file wtmp is maintained by init(8) and login(1). Neither ac nor login creates the wtmp; if it doesn't exist, no accounting is done. To begin accounting, create the file with a length of zero. NOTE: The wtmp file can get really big, really fast. You might want to trim it every once in a while. GNU ac works nearly the same as UNIX ac, though it's a little smarter in several ways. You should therefore expect differences in the output of GNU ac and the output of ac's on other systems. Use the command info accounting to get additional information. -d, --daily-totals Print totals for each day rather than just one big total at the end. The output looks like this: Jul 3 total 1.17 Jul 4 total 2.10 Jul 5 total 8.23 Jul 6 total 2.10 Jul 7 total 0.30 -p, --individual-totals Print time totals for each user in addition to the usual everything-lumped-into-one value. It looks like: bob 8.06 goff 0.60 maley 7.37 root 0.12 total 16.15 people Print out the sum total of the connect time used by all of the users included in people. Note that people is a space separated list of valid user names; wildcards are not allowed. -f, --file filename Read from the file filename instead of the system's wtmp file. --complain When the wtmp file has a problem (a time-warp, missing record, or whatever), print out an appropriate error. --reboots Reboot records are NOT written at the time of a reboot, but when the system restarts; therefore, it is impossible to know exactly when the reboot occurred. Users may have been logged into the system at the time of the reboot, and many ac's automatically count the time between the login and the reboot record against the user (even though all of that time shouldn't be, perhaps, if the system is down for a long time, for instance). If you want to count this time, include the flag. *For vanilla ac compatibility, include this flag.* --supplants Sometimes, a logout record is not written for a specific terminal, so the time that the last user accrued cannot be calculated. If you want to include the time from the user's login to the next login on the terminal (though probably incorrect), include this flag. *For vanilla ac compatibility, include this flag.* --timewarps Sometimes, entries in a wtmp file will suddenly jump back into the past without a clock change record occurring. It is impossible to know how long a user was logged in when this occurs. If you want to count the time between the login and the time warp against the user, include this flag. *For vanilla ac compatibility, include this flag.* --compatibility This is shorthand for typing out the three above options. -a, --all-days If we're printing daily totals, print a record for every day instead of skipping intervening days where there is no login activity. Without this flag, time accrued during those intervening days gets listed under the next day where there is login activity. --tw-leniency num Set the time warp leniency to num seconds. Records in wtmp files might be slightly out of order (most notably when two logins occur within a one-second period - the second one gets written first). By default, this value is set to 60. If the program notices this problem, time is not assigned to users unless the --timewarps flag is used. 
--tw-suspicious num Set the time warp suspicious value to num seconds. If two records in the wtmp file are farther than this number of seconds apart, there is a problem with the wtmp file (or your machine hasn't been used in a year). If the program notices this problem, time is not assigned to users unless the --timewarps flag is used. -y, --print-year Print year when displaying dates. -z, --print-zeros If a total for any category (save the grand total) is zero, print it. The default is to suppress printing. --debug Print verbose internal information. -V, --version Print the version number of ac to standard output and quit. -h, --help Prints the usage string and default locations of system files to standard output and exits. | # ac
> Print statistics on how long users have been connected. More information:
> https://man.openbsd.org/ac.
* Print the total connect time in hours for all users (from the current `wtmp` file):
`ac`
* Print how long users have been connected in hours:
`ac -p`
* Print how long a particular user has been connected in hours:
`ac -p {{username}}`
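* Print the total connect time read from a specific file instead of the system's `wtmp` (the path is a placeholder):
`ac --file {{path/to/wtmp}}`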
* Print how long a particular user has been connected in hours per day (with total):
`ac -dp {{username}}` |
sha1sum | Print or check SHA1 (160-bit) checksums. With no FILE, or when FILE is -, read standard input. -b, --binary read in binary mode -c, --check read checksums from the FILEs and check them --tag create a BSD-style checksum -t, --text read in text mode (default) -z, --zero end each output line with NUL, not newline, and disable file name escaping The following five options are useful only when verifying checksums: --ignore-missing don't fail or report status for missing files --quiet don't print OK for each successfully verified file --status don't output anything, status code shows success --strict exit non-zero for improperly formatted checksum lines -w, --warn warn about improperly formatted checksum lines --help display this help and exit --version output version information and exit The sums are computed as described in FIPS-180-1. When checking, the input should be a former output of this program. The default mode is to print a line with: checksum, a space, a character indicating input mode ('*' for binary, ' ' for text or where binary is insignificant), and name for each FILE. Note: There is no difference between binary mode and text mode on GNU systems. | # sha1sum
> Calculate SHA1 cryptographic checksums. More information:
> https://www.gnu.org/software/coreutils/sha1sum.
* Calculate the SHA1 checksum for one or more files:
`sha1sum {{path/to/file1 path/to/file2 ...}}`
* Calculate and save the list of SHA1 checksums to a file:
`sha1sum {{path/to/file1 path/to/file2 ...}} > {{path/to/file.sha1}}`
* Calculate a SHA1 checksum from `stdin`:
`{{command}} | sha1sum`
* Read a file of SHA1 sums and filenames and verify all files have matching checksums:
`sha1sum --check {{path/to/file.sha1}}`
* Only show a message for missing files or when verification fails:
`sha1sum --check --quiet {{path/to/file.sha1}}`
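* Verify checksums silently, relying only on the exit code (useful in scripts; uses the `--status` flag described above):
`sha1sum --check --status {{path/to/file.sha1}}`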
* Only show a message when verification fails, ignoring missing files:
`sha1sum --ignore-missing --check --quiet {{path/to/file.sha1}}` |
ed | The ed utility is a line-oriented text editor that uses two modes: command mode and input mode. In command mode the input characters shall be interpreted as commands, and in input mode they shall be interpreted as text. See the EXTENDED DESCRIPTION section. If an operand is '-', the results are unspecified. The ed utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines, except for the unspecified usage of '-'. The following options shall be supported: -p string Use string as the prompt string when in command mode. By default, there shall be no prompt string. -s Suppress the writing of byte counts by e, E, r, and w commands and of the '!' prompt after a !command. | # ed
> The original Unix text editor. See also: `awk`, `sed`. More information:
> https://www.gnu.org/software/ed/manual/ed_manual.html.
* Start an interactive editor session with an empty document:
`ed`
* Start an interactive editor session with an empty document and a specific [p]rompt:
`ed -p '> '`
* Start an interactive editor session with an empty document and without diagnostics, byte counts and '!' prompt:
`ed -s`
* Edit a specific file (this shows the byte count of the loaded file):
`ed {{path/to/file}}`
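* From within an editor session, write the buffer to a specific file (standard `ed` command; the path is a placeholder):
`w {{path/to/file}}`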
* Replace a string with a specific replacement for all lines:
`,s/{{regular_expression}}/{{replacement}}/g` |
systemd-analyze | systemd-analyze may be used to determine system boot-up performance statistics and retrieve other state and tracing information from the system and service manager, and to verify the correctness of unit files. It is also used to access special functions useful for advanced system manager debugging. If no command is passed, systemd-analyze time is implied. systemd-analyze time This command prints the time spent in the kernel before userspace has been reached, the time spent in the initrd before normal system userspace has been reached, and the time normal system userspace took to initialize. Note that these measurements simply measure the time passed up to the point where all system services have been spawned, but not necessarily until they fully finished initialization or the disk is idle. Example 1. Show how long the boot took # in a container $ systemd-analyze time Startup finished in 296ms (userspace) multi-user.target reached after 275ms in userspace # on a real machine $ systemd-analyze time Startup finished in 2.584s (kernel) + 19.176s (initrd) + 47.847s (userspace) = 1min 9.608s multi-user.target reached after 47.820s in userspace systemd-analyze blame This command prints a list of all running units, ordered by the time they took to initialize. This information may be used to optimize boot-up times. Note that the output might be misleading as the initialization of one service might be slow simply because it waits for the initialization of another service to complete. Also note: systemd-analyze blame doesn't display results for services with Type=simple, because systemd considers such services to be started immediately, hence no measurement of the initialization delays can be done. Also note that this command only shows the time units took for starting up, it does not show how long unit jobs spent in the execution queue. In particular it shows the time units spent in "activating" state, which is not defined for units such as device units that transition directly from "inactive" to "active". This command hence gives an impression of the performance of program code, but cannot accurately reflect latency introduced by waiting for hardware and similar events. Example 2. Show which units took the most time during boot $ systemd-analyze blame 32.875s pmlogger.service 20.905s systemd-networkd-wait-online.service 13.299s dev-vda1.device ... 23ms sysroot.mount 11ms initrd-udevadm-cleanup-db.service 3ms sys-kernel-config.mount systemd-analyze critical-chain [UNIT...] This command prints a tree of the time-critical chain of units (for each of the specified UNITs or for the default target otherwise). The time after the unit is active or started is printed after the "@" character. The time the unit takes to start is printed after the "+" character. Note that the output might be misleading as the initialization of services might depend on socket activation and because of the parallel execution of units. Also, similarly to the blame command, this only takes into account the time units spent in "activating" state, and hence does not cover units that never went through an "activating" state (such as device units that transition directly from "inactive" to "active"). Moreover it does not show information on jobs (and in particular not jobs that timed out). Example 3. 
systemd-analyze critical-chain $ systemd-analyze critical-chain multi-user.target @47.820s └─pmie.service @35.968s +548ms └─pmcd.service @33.715s +2.247s └─network-online.target @33.712s └─systemd-networkd-wait-online.service @12.804s +20.905s └─systemd-networkd.service @11.109s +1.690s └─systemd-udevd.service @9.201s +1.904s └─systemd-tmpfiles-setup-dev.service @7.306s +1.776s └─kmod-static-nodes.service @6.976s +177ms └─systemd-journald.socket └─system.slice └─-.slice systemd-analyze dump [pattern...] Without any parameter, this command outputs a (usually very long) human-readable serialization of the complete service manager state. Optional glob pattern may be specified, causing the output to be limited to units whose names match one of the patterns. The output format is subject to change without notice and should not be parsed by applications. This command is rate limited for unprivileged users. Example 4. Show the internal state of user manager $ systemd-analyze --user dump Timestamp userspace: Thu 2019-03-14 23:28:07 CET Timestamp finish: Thu 2019-03-14 23:28:07 CET Timestamp generators-start: Thu 2019-03-14 23:28:07 CET Timestamp generators-finish: Thu 2019-03-14 23:28:07 CET Timestamp units-load-start: Thu 2019-03-14 23:28:07 CET Timestamp units-load-finish: Thu 2019-03-14 23:28:07 CET -> Unit proc-timer_list.mount: Description: /proc/timer_list ... -> Unit default.target: Description: Main user target ... systemd-analyze malloc [D-Bus service...] This command can be used to request the output of the internal memory state (as returned by malloc_info(3)) of a D-Bus service. If no service is specified, the query will be sent to org.freedesktop.systemd1 (the system or user service manager). The output format is not guaranteed to be stable and should not be parsed by applications. The service must implement the org.freedesktop.MemoryAllocation1 interface. In the systemd suite, it is currently only implemented by the manager. systemd-analyze plot This command prints either an SVG graphic, detailing which system services have been started at what time, highlighting the time they spent on initialization, or the raw time data in JSON or table format. Example 5. Plot a bootchart $ systemd-analyze plot >bootup.svg $ eog bootup.svg& Note that this plot is based on the most recent per-unit timing data of loaded units. This means that if a unit gets started, then stopped and then started again the information shown will cover the most recent start cycle, not the first one. Thus it's recommended to consult this information only shortly after boot, so that this distinction doesn't matter. Moreover, units that are not referenced by any other unit through a dependency might be unloaded by the service manager once they terminate (and did not fail). Such units will not show up in the plot. systemd-analyze dot [pattern...] This command generates textual dependency graph description in dot format for further processing with the GraphViz dot(1) tool. Use a command line like systemd-analyze dot | dot -Tsvg >systemd.svg to generate a graphical dependency tree. Unless --order or --require is passed, the generated graph will show both ordering and requirement dependencies. Optional pattern globbing style specifications (e.g. *.target) may be given at the end. A unit dependency is included in the graph if any of these patterns match either the origin or destination node. Example 6. 
Plot all dependencies of any unit whose name starts with "avahi-daemon" $ systemd-analyze dot 'avahi-daemon.*' | dot -Tsvg >avahi.svg $ eog avahi.svg Example 7. Plot the dependencies between all known target units $ systemd-analyze dot --to-pattern='*.target' --from-pattern='*.target' \ | dot -Tsvg >targets.svg $ eog targets.svg systemd-analyze unit-paths This command outputs a list of all directories from which unit files, .d overrides, and .wants, .requires symlinks may be loaded. Combine with --user to retrieve the list for the user manager instance, and --global for the global configuration of user manager instances. Example 8. Show all paths for generated units $ systemd-analyze unit-paths | grep '^/run' /run/systemd/system.control /run/systemd/transient /run/systemd/generator.early /run/systemd/system /run/systemd/system.attached /run/systemd/generator /run/systemd/generator.late Note that this verb prints the list that is compiled into systemd-analyze itself, and does not communicate with the running manager. Use systemctl [--user] [--global] show -p UnitPath --value to retrieve the actual list that the manager uses, with any empty directories omitted. systemd-analyze exit-status [STATUS...] This command prints a list of exit statuses along with their "class", i.e. the source of the definition (one of "glibc", "systemd", "LSB", or "BSD"), see the Process Exit Codes section in systemd.exec(5). If no additional arguments are specified, all known statuses are shown. Otherwise, only the definitions for the specified codes are shown. Example 9. Show some example exit status names $ systemd-analyze exit-status 0 1 {63..65} NAME STATUS CLASS SUCCESS 0 glibc FAILURE 1 glibc - 63 - USAGE 64 BSD DATAERR 65 BSD systemd-analyze capability [CAPABILITY...] This command prints a list of Linux capabilities along with their numeric IDs. See capabilities(7) for details. If no argument is specified the full list of capabilities known to the service manager and the kernel is shown. Capabilities defined by the kernel but not known to the service manager are shown as "cap_???". Optionally, if arguments are specified they may refer to specific capabilities by name or numeric ID, in which case only the indicated capabilities are shown in the table. Example 10. Show some example capability names $ systemd-analyze capability 0 1 {30..32} NAME NUMBER cap_chown 0 cap_dac_override 1 cap_audit_control 30 cap_setfcap 31 cap_mac_override 32 systemd-analyze condition CONDITION... This command will evaluate Condition*=... and Assert*=... assignments, and print their values, and the resulting value of the combined condition set. See systemd.unit(5) for a list of available conditions and asserts. Example 11. Evaluate conditions that check kernel versions $ systemd-analyze condition 'ConditionKernelVersion = ! <4.0' \ 'ConditionKernelVersion = >=5.1' \ 'ConditionACPower=|false' \ 'ConditionArchitecture=|!arm' \ 'AssertPathExists=/etc/os-release' test.service: AssertPathExists=/etc/os-release succeeded. Asserts succeeded. test.service: ConditionArchitecture=|!arm succeeded. test.service: ConditionACPower=|false failed. test.service: ConditionKernelVersion=>=5.1 succeeded. test.service: ConditionKernelVersion=!<4.0 succeeded. Conditions succeeded. systemd-analyze syscall-filter [SET...] This command will list system calls contained in the specified system call set SET, or all known sets if no sets are specified. Argument SET must include the "@" prefix. systemd-analyze filesystems [SET...] 
This command will list filesystems in the specified filesystem set SET, or all known sets if no sets are specified. Argument SET must include the "@" prefix. systemd-analyze calendar EXPRESSION... This command will parse and normalize repetitive calendar time events, and will calculate when they elapse next. This takes the same input as the OnCalendar= setting in systemd.timer(5), following the syntax described in systemd.time(7). By default, only the next time the calendar expression will elapse is shown; use --iterations= to show the specified number of next times the expression elapses. Each time the expression elapses forms a timestamp, see the timestamp verb below. Example 12. Show leap days in the near future $ systemd-analyze calendar --iterations=5 '*-2-29 0:0:0' Original form: *-2-29 0:0:0 Normalized form: *-02-29 00:00:00 Next elapse: Sat 2020-02-29 00:00:00 UTC From now: 11 months 15 days left Iter. #2: Thu 2024-02-29 00:00:00 UTC From now: 4 years 11 months left Iter. #3: Tue 2028-02-29 00:00:00 UTC From now: 8 years 11 months left Iter. #4: Sun 2032-02-29 00:00:00 UTC From now: 12 years 11 months left Iter. #5: Fri 2036-02-29 00:00:00 UTC From now: 16 years 11 months left systemd-analyze timestamp TIMESTAMP... This command parses a timestamp (i.e. a single point in time) and outputs the normalized form and the difference between this timestamp and now. The timestamp should adhere to the syntax documented in systemd.time(7), section "PARSING TIMESTAMPS". Example 13. Show parsing of timestamps $ systemd-analyze timestamp yesterday now tomorrow Original form: yesterday Normalized form: Mon 2019-05-20 00:00:00 CEST (in UTC): Sun 2019-05-19 22:00:00 UTC UNIX seconds: @15583032000 From now: 1 day 9h ago Original form: now Normalized form: Tue 2019-05-21 09:48:39 CEST (in UTC): Tue 2019-05-21 07:48:39 UTC UNIX seconds: @1558424919.659757 From now: 43us ago Original form: tomorrow Normalized form: Wed 2019-05-22 00:00:00 CEST (in UTC): Tue 2019-05-21 22:00:00 UTC UNIX seconds: @15584760000 From now: 14h left systemd-analyze timespan EXPRESSION... This command parses a time span (i.e. a difference between two timestamps) and outputs the normalized form and the equivalent value in microseconds. The time span should adhere to the syntax documented in systemd.time(7), section "PARSING TIME SPANS". Values without units are parsed as seconds. Example 14. Show parsing of timespans $ systemd-analyze timespan 1s 300s '1year 0.000001s' Original: 1s μs: 1000000 Human: 1s Original: 300s μs: 300000000 Human: 5min Original: 1year 0.000001s μs: 31557600000001 Human: 1y 1us systemd-analyze cat-config NAME|PATH... This command is similar to systemctl cat, but operates on config files. It will copy the contents of a config file and any drop-ins to standard output, using the usual systemd set of directories and rules for precedence. Each argument must be either an absolute path including the prefix (such as /etc/systemd/logind.conf or /usr/lib/systemd/logind.conf), or a name relative to the prefix (such as systemd/logind.conf). Example 15. Showing logind configuration $ systemd-analyze cat-config systemd/logind.conf # /etc/systemd/logind.conf ... [Login] NAutoVTs=8 ... # /usr/lib/systemd/logind.conf.d/20-test.conf ... some override from another package # /etc/systemd/logind.conf.d/50-override.conf ... some administrator override systemd-analyze compare-versions VERSION1 [OP] VERSION2 This command has two distinct modes of operation, depending on whether the operator OP is specified. 
In the first mode — when OP is not specified — it will compare the two version strings and print either "VERSION1 < VERSION2", or "VERSION1 == VERSION2", or "VERSION1 > VERSION2" as appropriate. The exit status is 0 if the versions are equal, 11 if the version of the right is smaller, and 12 if the version of the left is smaller. (This matches the convention used by rpmdev-vercmp.) In the second mode — when OP is specified — it will compare the two version strings using the operation OP and return 0 (success) if the condition is satisfied, and 1 (failure) otherwise. OP may be lt, le, eq, ne, ge, gt. In this mode, no output is printed. (This matches the convention used by dpkg(1) --compare-versions.) Example 16. Compare versions of a package $ systemd-analyze compare-versions systemd-250~rc1.fc36.aarch64 systemd-251.fc36.aarch64 systemd-250~rc1.fc36.aarch64 < systemd-251.fc36.aarch64 $ echo $? 12 $ systemd-analyze compare-versions 1 lt 2; echo $? 0 $ systemd-analyze compare-versions 1 ge 2; echo $? 1 systemd-analyze verify FILE... This command will load unit files and print warnings if any errors are detected. Files specified on the command line will be loaded, but also any other units referenced by them. A unit's name on disk can be overridden by specifying an alias after a colon; see below for an example. The full unit search path is formed by combining the directories for all command line arguments, and the usual unit load paths. The variable $SYSTEMD_UNIT_PATH is supported, and may be used to replace or augment the compiled in set of unit load paths; see systemd.unit(5). All unit files present in the directories containing the command line arguments will be used in preference to the other paths. The following errors are currently detected: • unknown sections and directives, • missing dependencies which are required to start the given unit, • man pages listed in Documentation= which are not found in the system, • commands listed in ExecStart= and similar which are not found in the system or not executable. Example 17. Misspelt directives $ cat ./user.slice [Unit] WhatIsThis=11 Documentation=man:nosuchfile(1) Requires=different.service [Service] Description=x $ systemd-analyze verify ./user.slice [./user.slice:9] Unknown lvalue 'WhatIsThis' in section 'Unit' [./user.slice:13] Unknown section 'Service'. Ignoring. Error: org.freedesktop.systemd1.LoadFailed: Unit different.service failed to load: No such file or directory. Failed to create user.slice/start: Invalid argument user.slice: man nosuchfile(1) command failed with code 16 Example 18. Missing service units $ tail ./a.socket ./b.socket ==> ./a.socket <== [Socket] ListenStream=100 ==> ./b.socket <== [Socket] ListenStream=100 Accept=yes $ systemd-analyze verify ./a.socket ./b.socket Service a.service not loaded, a.socket cannot be started. Service b@0.service not loaded, b.socket cannot be started. Example 19. Aliasing a unit $ cat /tmp/source [Unit] Description=Hostname printer [Service] Type=simple ExecStart=/usr/bin/echo %H MysteryKey=true $ systemd-analyze verify /tmp/source Failed to prepare filename /tmp/source: Invalid argument $ systemd-analyze verify /tmp/source:alias.service alias.service:7: Unknown key name 'MysteryKey' in section 'Service', ignoring. systemd-analyze security [UNIT...] This command analyzes the security and sandboxing settings of one or more specified service units. If at least one unit name is specified the security settings of the specified service units are inspected and a detailed analysis is shown. 
If no unit name is specified, all currently loaded, long-running service units are inspected and a terse table with results shown. The command checks for various security-related service settings, assigning each a numeric "exposure level" value, depending on how important a setting is. It then calculates an overall exposure level for the whole unit, which is an estimation in the range 0.0...10.0 indicating how exposed a service is security-wise. High exposure levels indicate very little applied sandboxing. Low exposure levels indicate tight sandboxing and strongest security restrictions. Note that this only analyzes the per-service security features systemd itself implements. This means that any additional security mechanisms applied by the service code itself are not accounted for. The exposure level determined this way should not be misunderstood: a high exposure level neither means that there is no effective sandboxing applied by the service code itself, nor that the service is actually vulnerable to remote or local attacks. High exposure levels do indicate however that most likely the service might benefit from additional settings applied to them. Please note that many of the security and sandboxing settings individually can be circumvented — unless combined with others. For example, if a service retains the privilege to establish or undo mount points many of the sandboxing options can be undone by the service code itself. Due to that it is essential that each service uses the most comprehensive and strict sandboxing and security settings possible. The tool will take into account some of these combinations and relationships between the settings, but not all. Also note that the security and sandboxing settings analyzed here only apply to the operations executed by the service code itself. If a service has access to an IPC system (such as D-Bus) it might request operations from other services that are not subject to the same restrictions. Any comprehensive security and sandboxing analysis is hence incomplete if the IPC access policy is not validated too. Example 20. Analyze systemd-logind.service $ systemd-analyze security --no-pager systemd-logind.service NAME DESCRIPTION EXPOSURE ✗ PrivateNetwork= Service has access to the host's network 0.5 ✗ User=/DynamicUser= Service runs as root user 0.4 ✗ DeviceAllow= Service has no device ACL 0.2 ✓ IPAddressDeny= Service blocks all IP address ranges ... → Overall exposure level for systemd-logind.service: 4.1 OK 🙂 systemd-analyze inspect-elf FILE... This command will load the specified files, and if they are ELF objects (executables, libraries, core files, etc.) it will parse the embedded packaging metadata, if any, and print it in a table or json format. See the Packaging Metadata[1] documentation for more information. Example 21. Print information about a core file as JSON $ systemd-analyze inspect-elf --json=pretty \ core.fsverity.1000.f77dac5dc161402aa44e15b7dd9dcf97.58561.1637106137000000 { "elfType" : "coredump", "elfArchitecture" : "AMD x86-64", "/home/bluca/git/fsverity-utils/fsverity" : { "type" : "deb", "name" : "fsverity-utils", "version" : "1.3-1", "buildId" : "7c895ecd2a271f93e96268f479fdc3c64a2ec4ee" }, "/home/bluca/git/fsverity-utils/libfsverity.so.0" : { "type" : "deb", "name" : "fsverity-utils", "version" : "1.3-1", "buildId" : "b5e428254abf14237b0ae70ed85fffbb98a78f88" } } systemd-analyze fdstore [UNIT...] Lists the current contents of the specified service unit's file descriptor store. 
This shows names, inode types, device numbers, inode numbers, paths and open modes of the open file descriptors. The specified units must have FileDescriptorStoreMax= enabled, see systemd.service(5) for details. Example 22. Table output $ systemd-analyze fdstore systemd-journald.service FDNAME TYPE DEVNO INODE RDEVNO PATH FLAGS stored sock 0:8 4218620 - socket:[4218620] ro stored sock 0:8 4213198 - socket:[4213198] ro stored sock 0:8 4213190 - socket:[4213190] ro ... Note: the "DEVNO" column refers to the major/minor numbers of the device node backing the file system the file descriptor's inode is on. The "RDEVNO" column refers to the major/minor numbers of the device node itself if the file descriptor refers to one. Compare with corresponding .st_dev and .st_rdev fields in struct stat (see stat(2) for details). The listed inode numbers in the "INODE" column are on the file system indicated by "DEVNO". systemd-analyze image-policy [POLICY...] This command analyzes the specified image policy string, as per systemd.image-policy(7). The policy is normalized and simplified. For each currently defined partition identifier (as per the Discoverable Partitions Specification[2]) the effect of the image policy string is shown in tabular form. Example 23. Example Output $ systemd-analyze image-policy swap=encrypted:usr=read-only-on+verity:root=encrypted Analyzing policy: root=encrypted:usr=verity+read-only-on:swap=encrypted Long form: root=encrypted:usr=verity+read-only-on:swap=encrypted:=unused+absent PARTITION MODE READ-ONLY GROWFS root encrypted - - usr verity yes - home ignore - - srv ignore - - esp ignore - - xbootldr ignore - - swap encrypted - - root-verity ignore - - usr-verity unprotected yes - root-verity-sig ignore - - usr-verity-sig ignore - - tmp ignore - - var ignore - - default ignore - - The following options are understood: --system Operates on the system systemd instance. This is the implied default. --user Operates on the user systemd instance. --global Operates on the system-wide configuration for user systemd instance. --order, --require When used in conjunction with the dot command (see above), selects which dependencies are shown in the dependency graph. If --order is passed, only dependencies of type After= or Before= are shown. If --require is passed, only dependencies of type Requires=, Requisite=, Wants= and Conflicts= are shown. If neither is passed, this shows dependencies of all these types. --from-pattern=, --to-pattern= When used in conjunction with the dot command (see above), this selects which relationships are shown in the dependency graph. Both options require a glob(7) pattern as an argument, which will be matched against the left-hand and the right-hand, respectively, nodes of a relationship. Each of these can be used more than once, in which case the unit name must match one of the values. When tests for both sides of the relation are present, a relation must pass both tests to be shown. When patterns are also specified as positional arguments, they must match at least one side of the relation. In other words, patterns specified with those two options will trim the list of edges matched by the positional arguments, if any are given, and fully determine the list of edges shown otherwise. --fuzz=timespan When used in conjunction with the critical-chain command (see above), also show units which finished timespan earlier than the latest unit in the same level. The unit of timespan is seconds unless specified with a different unit, e.g. "50ms". 
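As an illustration of the option above (a hedged sketch, not an example taken from the original manual; the 100ms value and the target name are placeholders):

    $ systemd-analyze critical-chain --fuzz=100ms multi-user.target

In addition to the slowest unit at each level, this would also list units that finished up to 100ms earlier.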
--man=no Do not invoke man(1) to verify the existence of man pages listed in Documentation=. --generators Invoke unit generators, see systemd.generator(7). Some generators require root privileges. Under a normal user, running with generators enabled will generally result in some warnings. --recursive-errors=MODE Control verification of units and their dependencies and whether systemd-analyze verify exits with a non-zero process exit status or not. With yes, return a non-zero process exit status when warnings arise during verification of either the specified unit or any of its associated dependencies. With no, return a non-zero process exit status when warnings arise during verification of only the specified unit. With one, return a non-zero process exit status when warnings arise during verification of either the specified unit or its immediate dependencies. If this option is not specified, zero is returned as the exit status regardless of whether warnings arise during verification or not. --root=PATH With cat-files and verify, operate on files underneath the specified root path PATH. --image=PATH With cat-files and verify, operate on files inside the specified image path PATH. --image-policy=policy Takes an image policy string as argument, as per systemd.image-policy(7). The policy is enforced when operating on the disk image specified via --image=, see above. If not specified defaults to the "*" policy, i.e. all recognized file systems in the image are used. --offline=BOOL With security, perform an offline security review of the specified unit files, i.e. does not have to rely on PID 1 to acquire security information for the files like the security verb when used by itself does. This means that --offline= can be used with --root= and --image= as well. If a unit's overall exposure level is above that set by --threshold= (default value is 100), --offline= will return an error. --profile=PATH With security --offline=, takes into consideration the specified portable profile when assessing unit settings. The profile can be passed by name, in which case the well-known system locations will be searched, or it can be the full path to a specific drop-in file. --threshold=NUMBER With security, allow the user to set a custom value to compare the overall exposure level with, for the specified unit files. If a unit's overall exposure level is greater than that set by the user, security will return an error. --threshold= can be used with --offline= as well and its default value is 100. --security-policy=PATH With security, allow the user to define a custom set of requirements formatted as a JSON file against which to compare the specified unit file(s) and determine their overall exposure level to security threats. Table 1. 
Accepted Assessment Test Identifiers ┌─────────────────────────────────────────────────────────┐ │Assessment Test Identifier │ ├─────────────────────────────────────────────────────────┤ │UserOrDynamicUser │ ├─────────────────────────────────────────────────────────┤ │SupplementaryGroups │ ├─────────────────────────────────────────────────────────┤ │PrivateMounts │ ├─────────────────────────────────────────────────────────┤ │PrivateDevices │ ├─────────────────────────────────────────────────────────┤ │PrivateTmp │ ├─────────────────────────────────────────────────────────┤ │PrivateNetwork │ ├─────────────────────────────────────────────────────────┤ │PrivateUsers │ ├─────────────────────────────────────────────────────────┤ │ProtectControlGroups │ ├─────────────────────────────────────────────────────────┤ │ProtectKernelModules │ ├─────────────────────────────────────────────────────────┤ │ProtectKernelTunables │ ├─────────────────────────────────────────────────────────┤ │ProtectKernelLogs │ ├─────────────────────────────────────────────────────────┤ │ProtectClock │ ├─────────────────────────────────────────────────────────┤ │ProtectHome │ ├─────────────────────────────────────────────────────────┤ │ProtectHostname │ ├─────────────────────────────────────────────────────────┤ │ProtectSystem │ ├─────────────────────────────────────────────────────────┤ │RootDirectoryOrRootImage │ ├─────────────────────────────────────────────────────────┤ │LockPersonality │ ├─────────────────────────────────────────────────────────┤ │MemoryDenyWriteExecute │ ├─────────────────────────────────────────────────────────┤ │NoNewPrivileges │ ├─────────────────────────────────────────────────────────┤ │CapabilityBoundingSet_CAP_SYS_ADMIN │ ├─────────────────────────────────────────────────────────┤ │CapabilityBoundingSet_CAP_SET_UID_GID_PCAP │ ├─────────────────────────────────────────────────────────┤ │CapabilityBoundingSet_CAP_SYS_PTRACE │ ├─────────────────────────────────────────────────────────┤ │CapabilityBoundingSet_CAP_SYS_TIME │ ├─────────────────────────────────────────────────────────┤ │CapabilityBoundingSet_CAP_NET_ADMIN │ ├─────────────────────────────────────────────────────────┤ │CapabilityBoundingSet_CAP_SYS_RAWIO │ ├─────────────────────────────────────────────────────────┤ │CapabilityBoundingSet_CAP_SYS_MODULE │ ├─────────────────────────────────────────────────────────┤ │CapabilityBoundingSet_CAP_AUDIT │ ├─────────────────────────────────────────────────────────┤ │CapabilityBoundingSet_CAP_SYSLOG │ ├─────────────────────────────────────────────────────────┤ │CapabilityBoundingSet_CAP_SYS_NICE_RESOURCE │ ├─────────────────────────────────────────────────────────┤ │CapabilityBoundingSet_CAP_MKNOD │ ├─────────────────────────────────────────────────────────┤ │CapabilityBoundingSet_CAP_CHOWN_FSETID_SETFCAP │ ├─────────────────────────────────────────────────────────┤ │CapabilityBoundingSet_CAP_DAC_FOWNER_IPC_OWNER │ ├─────────────────────────────────────────────────────────┤ │CapabilityBoundingSet_CAP_KILL │ ├─────────────────────────────────────────────────────────┤ │CapabilityBoundingSet_CAP_NET_BIND_SERVICE_BROADCAST_RAW │ ├─────────────────────────────────────────────────────────┤ │CapabilityBoundingSet_CAP_SYS_BOOT │ ├─────────────────────────────────────────────────────────┤ │CapabilityBoundingSet_CAP_MAC │ ├─────────────────────────────────────────────────────────┤ │CapabilityBoundingSet_CAP_LINUX_IMMUTABLE │ ├─────────────────────────────────────────────────────────┤ 
│CapabilityBoundingSet_CAP_IPC_LOCK │ ├─────────────────────────────────────────────────────────┤ │CapabilityBoundingSet_CAP_SYS_CHROOT │ ├─────────────────────────────────────────────────────────┤ │CapabilityBoundingSet_CAP_BLOCK_SUSPEND │ ├─────────────────────────────────────────────────────────┤ │CapabilityBoundingSet_CAP_WAKE_ALARM │ ├─────────────────────────────────────────────────────────┤ │CapabilityBoundingSet_CAP_LEASE │ ├─────────────────────────────────────────────────────────┤ │CapabilityBoundingSet_CAP_SYS_TTY_CONFIG │ ├─────────────────────────────────────────────────────────┤ │UMask │ ├─────────────────────────────────────────────────────────┤ │KeyringMode │ ├─────────────────────────────────────────────────────────┤ │ProtectProc │ ├─────────────────────────────────────────────────────────┤ │ProcSubset │ ├─────────────────────────────────────────────────────────┤ │NotifyAccess │ ├─────────────────────────────────────────────────────────┤ │RemoveIPC │ ├─────────────────────────────────────────────────────────┤ │Delegate │ ├─────────────────────────────────────────────────────────┤ │RestrictRealtime │ ├─────────────────────────────────────────────────────────┤ │RestrictSUIDSGID │ ├─────────────────────────────────────────────────────────┤ │RestrictNamespaces_user │ ├─────────────────────────────────────────────────────────┤ │RestrictNamespaces_mnt │ ├─────────────────────────────────────────────────────────┤ │RestrictNamespaces_ipc │ ├─────────────────────────────────────────────────────────┤ │RestrictNamespaces_pid │ ├─────────────────────────────────────────────────────────┤ │RestrictNamespaces_cgroup │ ├─────────────────────────────────────────────────────────┤ │RestrictNamespaces_uts │ ├─────────────────────────────────────────────────────────┤ │RestrictNamespaces_net │ ├─────────────────────────────────────────────────────────┤ │RestrictAddressFamilies_AF_INET_INET6 │ ├─────────────────────────────────────────────────────────┤ │RestrictAddressFamilies_AF_UNIX │ ├─────────────────────────────────────────────────────────┤ │RestrictAddressFamilies_AF_NETLINK │ ├─────────────────────────────────────────────────────────┤ │RestrictAddressFamilies_AF_PACKET │ ├─────────────────────────────────────────────────────────┤ │RestrictAddressFamilies_OTHER │ ├─────────────────────────────────────────────────────────┤ │SystemCallArchitectures │ ├─────────────────────────────────────────────────────────┤ │SystemCallFilter_swap │ ├─────────────────────────────────────────────────────────┤ │SystemCallFilter_obsolete │ ├─────────────────────────────────────────────────────────┤ │SystemCallFilter_clock │ ├─────────────────────────────────────────────────────────┤ │SystemCallFilter_cpu_emulation │ ├─────────────────────────────────────────────────────────┤ │SystemCallFilter_debug │ ├─────────────────────────────────────────────────────────┤ │SystemCallFilter_mount │ ├─────────────────────────────────────────────────────────┤ │SystemCallFilter_module │ ├─────────────────────────────────────────────────────────┤ │SystemCallFilter_raw_io │ ├─────────────────────────────────────────────────────────┤ │SystemCallFilter_reboot │ ├─────────────────────────────────────────────────────────┤ │SystemCallFilter_privileged │ ├─────────────────────────────────────────────────────────┤ │SystemCallFilter_resources │ ├─────────────────────────────────────────────────────────┤ │IPAddressDeny │ ├─────────────────────────────────────────────────────────┤ │DeviceAllow │ 
├─────────────────────────────────────────────────────────┤ │AmbientCapabilities │ └─────────────────────────────────────────────────────────┘ See example "JSON Policy" below. --json=MODE With the security command, generate a JSON formatted output of the security analysis table. The format is a JSON array with objects containing the following fields: set which indicates if the setting has been enabled or not, name which is what is used to refer to the setting, json_field which is the JSON compatible identifier of the setting, description which is an outline of the setting state, and exposure which is a number in the range 0.0...10.0, where a higher value corresponds to a higher security threat. The JSON version of the table is printed to standard output. The MODE passed to the option can be one of three: off which is the default, pretty and short which respectively output a prettified or shorted JSON version of the security table. With the plot command, generate a JSON formatted output of the raw time data. The format is a JSON array with objects containing the following fields: name which is the unit name, activated which is the time after startup the service was activated, activating which is how long after startup the service was initially started, time which is how long the service took to activate from when it was initially started, deactivated which is the time after startup that the service was deactivated, deactivating which is the time after startup that the service was initially told to deactivate. --iterations=NUMBER When used with the calendar command, show the specified number of iterations the specified calendar expression will elapse next. Defaults to 1. --base-time=TIMESTAMP When used with the calendar command, show next iterations relative to the specified point in time. If not specified defaults to the current time. --unit=UNIT When used with the condition command, evaluate all the Condition*=... and Assert*=... assignments in the specified unit file. The full unit search path is formed by combining the directories for the specified unit with the usual unit load paths. The variable $SYSTEMD_UNIT_PATH is supported, and may be used to replace or augment the compiled in set of unit load paths; see systemd.unit(5). All units files present in the directory containing the specified unit will be used in preference to the other paths. --table When used with the plot command, the raw time data is output in a table. --no-legend When used with the plot command in combination with either --table or --json=, no legends or hints are included in the output. -H, --host= Execute the operation remotely. Specify a hostname, or a username and hostname separated by "@", to connect to. The hostname may optionally be suffixed by a port ssh is listening on, separated by ":", and then a container name, separated by "/", which connects directly to a specific container on the specified host. This will use SSH to talk to the remote machine manager instance. Container names may be enumerated with machinectl -H HOST. Put IPv6 addresses in brackets. -M, --machine= Execute operation on a local container. Specify a container name to connect to, optionally prefixed by a user name to connect as and a separating "@" character. If the special string ".host" is used in place of the container name, a connection to the local system is made (which is useful to connect to a specific user's user bus: "--user --machine=lennart@.host"). If the "@" syntax is not used, the connection is made as root user. 
If the "@" syntax is used either the left hand side or the right hand side may be omitted (but not both) in which case the local user name and ".host" are implied. --quiet Suppress hints and other non-essential output. -h, --help Print a short help text and exit. --version Print a short version string and exit. --no-pager Do not pipe output into a pager. | # systemd-analyze
> Analyze and debug system manager. Show timing details about the boot process
> of units (services, mount points, devices, sockets). More information:
> https://www.freedesktop.org/software/systemd/man/systemd-analyze.html.
* List all running units, ordered by the time they took to initialize:
`systemd-analyze blame`
* Print a tree of the time-critical chain of units:
`systemd-analyze critical-chain`
* Create an SVG file showing when each system service started, highlighting the time that they spent on initialization:
`systemd-analyze plot > {{path/to/file.svg}}`
* Plot a dependency graph and convert it to an SVG file:
`systemd-analyze dot | dot -T{{svg}} > {{path/to/file.svg}}`
* Show security scores of running units:
`systemd-analyze security` |
timeout | Start COMMAND, and kill it if still running after DURATION. Mandatory arguments to long options are mandatory for short options too. --preserve-status exit with the same status as COMMAND, even when the command times out --foreground when not running timeout directly from a shell prompt, allow COMMAND to read from the TTY and get TTY signals; in this mode, children of COMMAND will not be timed out -k, --kill-after=DURATION also send a KILL signal if COMMAND is still running this long after the initial signal was sent -s, --signal=SIGNAL specify the signal to be sent on timeout; SIGNAL may be a name like 'HUP' or a number; see 'kill -l' for a list of signals -v, --verbose diagnose to stderr any signal sent upon timeout --help display this help and exit --version output version information and exit DURATION is a floating point number with an optional suffix: 's' for seconds (the default), 'm' for minutes, 'h' for hours or 'd' for days. A duration of 0 disables the associated timeout. Upon timeout, send the TERM signal to COMMAND, if no other SIGNAL specified. The TERM signal kills any process that does not block or catch that signal. It may be necessary to use the KILL signal, since this signal can't be caught. Exit status: 124 if COMMAND times out, and --preserve-status is not specified 125 if the timeout command itself fails 126 if COMMAND is found but cannot be invoked 127 if COMMAND cannot be found 137 if COMMAND (or timeout itself) is sent the KILL (9) signal (128+9) - the exit status of COMMAND otherwise | # timeout
> Run a command with a time limit. More information:
> https://www.gnu.org/software/coreutils/timeout.
* Run `sleep 10` and terminate it, if it runs for more than 3 seconds:
`timeout {{3s}} {{sleep 10}}`
* Specify the signal to send to the command once the time limit expires (TERM is sent by default):
`timeout --signal {{INT}} {{5s}} {{sleep 10}}` |
git-am | Splits mail messages in a mailbox into commit log message, authorship information and patches, and applies them to the current branch. You could think of it as a reverse operation of git-format-patch(1) run on a branch with a straight history without merges. (<mbox>|<Maildir>)... The list of mailbox files to read patches from. If you do not supply this argument, the command reads from the standard input. If you supply directories, they will be treated as Maildirs. -s, --signoff Add a Signed-off-by trailer to the commit message, using the committer identity of yourself. See the signoff option in git-commit(1) for more information. -k, --keep Pass -k flag to git mailinfo (see git-mailinfo(1)). --keep-non-patch Pass -b flag to git mailinfo (see git-mailinfo(1)). --[no-]keep-cr With --keep-cr, call git mailsplit (see git-mailsplit(1)) with the same option, to prevent it from stripping CR at the end of lines. am.keepcr configuration variable can be used to specify the default behaviour. --no-keep-cr is useful to override am.keepcr. -c, --scissors Remove everything in body before a scissors line (see git-mailinfo(1)). Can be activated by default using the mailinfo.scissors configuration variable. --no-scissors Ignore scissors lines (see git-mailinfo(1)). --quoted-cr=<action> This flag will be passed down to git mailinfo (see git-mailinfo(1)). --empty=(stop|drop|keep) By default, or when the option is set to stop, the command errors out on an input e-mail message lacking a patch and stops in the middle of the current am session. When this option is set to drop, skip such an e-mail message instead. When this option is set to keep, create an empty commit, recording the contents of the e-mail message as its log. -m, --message-id Pass the -m flag to git mailinfo (see git-mailinfo(1)), so that the Message-ID header is added to the commit message. The am.messageid configuration variable can be used to specify the default behaviour. --no-message-id Do not add the Message-ID header to the commit message. --no-message-id is useful to override am.messageid. -q, --quiet Be quiet. Only print error messages. -u, --utf8 Pass -u flag to git mailinfo (see git-mailinfo(1)). The proposed commit log message taken from the e-mail is re-coded into UTF-8 encoding (configuration variable i18n.commitEncoding can be used to specify project’s preferred encoding if it is not UTF-8). This was optional in prior versions of git, but now it is the default. You can use --no-utf8 to override this. --no-utf8 Pass -n flag to git mailinfo (see git-mailinfo(1)). -3, --3way, --no-3way When the patch does not apply cleanly, fall back on 3-way merge if the patch records the identity of blobs it is supposed to apply to and we have those blobs available locally. --no-3way can be used to override am.threeWay configuration variable. For more information, see am.threeWay in git-config(1). --rerere-autoupdate, --no-rerere-autoupdate After the rerere mechanism reuses a recorded resolution on the current conflict to update the files in the working tree, allow it to also update the index with the result of resolution. --no-rerere-autoupdate is a good way to double-check what rerere did and catch potential mismerges, before committing the result to the index with a separate git add. --ignore-space-change, --ignore-whitespace, --whitespace=<option>, -C<n>, -p<n>, --directory=<dir>, --exclude=<path>, --include=<path>, --reject These flags are passed to the git apply (see git-apply(1)) program that applies the patch. 
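As a hedged illustration of combining the options above (the mailbox path is a placeholder, not taken from the original page):

    # apply a mailed series with a Signed-off-by trailer, falling back to a 3-way merge on conflicts
    $ git am --3way --signoff path/to/series.mbox

    # same invocation using the short forms documented above
    $ git am -3 -s path/to/series.mbox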
--patch-format By default the command will try to detect the patch format automatically. This option allows the user to bypass the automatic detection and specify the patch format that the patch(es) should be interpreted as. Valid formats are mbox, mboxrd, stgit, stgit-series and hg. -i, --interactive Run interactively. -n, --no-verify By default, the pre-applypatch and applypatch-msg hooks are run. When any of --no-verify or -n is given, these are bypassed. See also githooks(5). --committer-date-is-author-date By default the command records the date from the e-mail message as the commit author date, and uses the time of commit creation as the committer date. This allows the user to lie about the committer date by using the same value as the author date. --ignore-date By default the command records the date from the e-mail message as the commit author date, and uses the time of commit creation as the committer date. This allows the user to lie about the author date by using the same value as the committer date. --skip Skip the current patch. This is only meaningful when restarting an aborted patch. -S[<keyid>], --gpg-sign[=<keyid>], --no-gpg-sign GPG-sign commits. The keyid argument is optional and defaults to the committer identity; if specified, it must be stuck to the option without a space. --no-gpg-sign is useful to countermand both commit.gpgSign configuration variable, and earlier --gpg-sign. --continue, -r, --resolved After a patch failure (e.g. attempting to apply conflicting patch), the user has applied it by hand and the index file stores the result of the application. Make a commit using the authorship and commit log extracted from the e-mail message and the current index file, and continue. --resolvemsg=<msg> When a patch failure occurs, <msg> will be printed to the screen before exiting. This overrides the standard message informing you to use --continue or --skip to handle the failure. This is solely for internal use between git rebase and git am. --abort Restore the original branch and abort the patching operation. Revert contents of files involved in the am operation to their pre-am state. --quit Abort the patching operation but keep HEAD and the index untouched. --show-current-patch[=(diff|raw)] Show the message at which git am has stopped due to conflicts. If raw is specified, show the raw contents of the e-mail message; if diff, show the diff portion only. Defaults to raw. --allow-empty After a patch failure on an input e-mail message lacking a patch, create an empty commit with the contents of the e-mail message as its log message. | # git am
> Apply patch files and create a commit. Useful when receiving commits via
> email. See also `git format-patch`, which can generate patch files. More
> information: https://git-scm.com/docs/git-am.
* Apply and commit changes following a local patch file:
`git am {{path/to/file.patch}}`
* Apply and commit changes following a remote patch file:
`curl -L {{https://example.com/file.patch}} | git am`
* Abort the process of applying a patch file:
`git am --abort`
* Apply as much of a patch file as possible, saving failed hunks to reject files:
`git am --reject {{path/to/file.patch}}` |
strings | The strings utility shall look for printable strings in regular files and shall write those strings to standard output. A printable string is any sequence of four (by default) or more printable characters terminated by a <newline> or NUL character. Additional implementation-defined strings may be written; see localedef. If the first argument is '-', the results are unspecified. The strings utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines, except for the unspecified usage of '-'. The following options shall be supported: -a Scan files in their entirety. If -a is not specified, it is implementation-defined what portion of each file is scanned for strings. -n number Specify the minimum string length, where the number argument is a positive decimal integer. The default shall be 4. -t format Write each string preceded by its byte offset from the start of the file. The format shall be dependent on the single character used as the format option-argument: d The offset shall be written in decimal. o The offset shall be written in octal. x The offset shall be written in hexadecimal. | # strings
> Find printable strings in an object file or binary. More information:
> https://manned.org/strings.
* Print all strings in a binary:
`strings {{path/to/file}}`
* Limit results to strings at least length characters long:
`strings -n {{length}} {{path/to/file}}`
* Prefix each result with its offset within the file:
`strings -t d {{path/to/file}}`
* Prefix each result with its offset within the file in hexadecimal:
`strings -t x {{path/to/file}}` |
git-pull | Incorporates changes from a remote repository into the current branch. If the current branch is behind the remote, then by default it will fast-forward the current branch to match the remote. If the current branch and the remote have diverged, the user needs to specify how to reconcile the divergent branches with --rebase or --no-rebase (or the corresponding configuration option in pull.rebase). More precisely, git pull runs git fetch with the given parameters and then depending on configuration options or command line flags, will call either git rebase or git merge to reconcile diverging branches. <repository> should be the name of a remote repository as passed to git-fetch(1). <refspec> can name an arbitrary remote ref (for example, the name of a tag) or even a collection of refs with corresponding remote-tracking branches (e.g., refs/heads/*:refs/remotes/origin/*), but usually it is the name of a branch in the remote repository. Default values for <repository> and <branch> are read from the "remote" and "merge" configuration for the current branch as set by git-branch(1) --track. Assume the following history exists and the current branch is "master": A---B---C master on origin / D---E---F---G master ^ origin/master in your repository Then "git pull" will fetch and replay the changes from the remote master branch since it diverged from the local master (i.e., E) until its current commit (C) on top of master and record the result in a new commit along with the names of the two parent commits and a log message from the user describing the changes. A---B---C origin/master / \ D---E---F---G---H master See git-merge(1) for details, including how conflicts are presented and handled. In Git 1.7.0 or later, to cancel a conflicting merge, use git reset --merge. Warning: In older versions of Git, running git pull with uncommitted changes is discouraged: while possible, it leaves you in a state that may be hard to back out of in the case of a conflict. If any of the remote changes overlap with local uncommitted changes, the merge will be automatically canceled and the work tree untouched. It is generally best to get any local changes in working order before pulling or stash them away with git-stash(1). -q, --quiet This is passed to both underlying git-fetch to squelch reporting of during transfer, and underlying git-merge to squelch output during merging. -v, --verbose Pass --verbose to git-fetch and git-merge. --[no-]recurse-submodules[=yes|on-demand|no] This option controls if new commits of populated submodules should be fetched, and if the working trees of active submodules should be updated, too (see git-fetch(1), git-config(1) and gitmodules(5)). If the checkout is done via rebase, local submodule commits are rebased as well. If the update is done via merge, the submodule conflicts are resolved and checked out. Options related to merging --commit, --no-commit Perform the merge and commit the result. This option can be used to override --no-commit. Only useful when merging. With --no-commit perform the merge and stop just before creating a merge commit, to give the user a chance to inspect and further tweak the merge result before committing. Note that fast-forward updates do not create a merge commit and therefore there is no way to stop those merges with --no-commit. Thus, if you want to ensure your branch is not changed or updated by the merge command, use --no-ff with --no-commit. 
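As a hedged sketch of the two reconciliation styles described above (the remote and branch names are placeholders):

    # fetch and then merge the upstream branch
    $ git pull --no-rebase origin main

    # fetch and then rebase the current branch onto the upstream branch
    $ git pull --rebase origin main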
--edit, -e, --no-edit Invoke an editor before committing successful mechanical merge to further edit the auto-generated merge message, so that the user can explain and justify the merge. The --no-edit option can be used to accept the auto-generated message (this is generally discouraged). Older scripts may depend on the historical behaviour of not allowing the user to edit the merge log message. They will see an editor opened when they run git merge. To make it easier to adjust such scripts to the updated behaviour, the environment variable GIT_MERGE_AUTOEDIT can be set to no at the beginning of them. --cleanup=<mode> This option determines how the merge message will be cleaned up before committing. See git-commit(1) for more details. In addition, if the <mode> is given a value of scissors, scissors will be appended to MERGE_MSG before being passed on to the commit machinery in the case of a merge conflict. --ff-only Only update to the new history if there is no divergent local history. This is the default when no method for reconciling divergent histories is provided (via the --rebase=* flags). --ff, --no-ff When merging rather than rebasing, specifies how a merge is handled when the merged-in history is already a descendant of the current history. If merging is requested, --ff is the default unless merging an annotated (and possibly signed) tag that is not stored in its natural place in the refs/tags/ hierarchy, in which case --no-ff is assumed. With --ff, when possible resolve the merge as a fast-forward (only update the branch pointer to match the merged branch; do not create a merge commit). When not possible (when the merged-in history is not a descendant of the current history), create a merge commit. With --no-ff, create a merge commit in all cases, even when the merge could instead be resolved as a fast-forward. -S[<keyid>], --gpg-sign[=<keyid>], --no-gpg-sign GPG-sign the resulting merge commit. The keyid argument is optional and defaults to the committer identity; if specified, it must be stuck to the option without a space. --no-gpg-sign is useful to countermand both commit.gpgSign configuration variable, and earlier --gpg-sign. --log[=<n>], --no-log In addition to branch names, populate the log message with one-line descriptions from at most <n> actual commits that are being merged. See also git-fmt-merge-msg(1). Only useful when merging. With --no-log do not list one-line descriptions from the actual commits being merged. --signoff, --no-signoff Add a Signed-off-by trailer by the committer at the end of the commit log message. The meaning of a signoff depends on the project to which you’re committing. For example, it may certify that the committer has the rights to submit the work under the project’s license or agrees to some contributor representation, such as a Developer Certificate of Origin. (See http://developercertificate.org for the one used by the Linux kernel and Git projects.) Consult the documentation or leadership of the project to which you’re contributing to understand how the signoffs are used in that project. The --no-signoff option can be used to countermand an earlier --signoff option on the command line. --stat, -n, --no-stat Show a diffstat at the end of the merge. The diffstat is also controlled by the configuration option merge.stat. With -n or --no-stat do not show a diffstat at the end of the merge. 
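For illustration only (remote and branch names are placeholders, not from the original page), the fast-forward behaviour discussed above can be requested explicitly:

    # refuse to do anything other than a fast-forward update
    $ git pull --ff-only

    # always create a merge commit and include up to 10 one-line commit summaries in its message
    $ git pull --no-ff --log=10 origin main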
--squash, --no-squash Produce the working tree and index state as if a real merge happened (except for the merge information), but do not actually make a commit, move the HEAD, or record $GIT_DIR/MERGE_HEAD (to cause the next git commit command to create a merge commit). This allows you to create a single commit on top of the current branch whose effect is the same as merging another branch (or more in case of an octopus). With --no-squash perform the merge and commit the result. This option can be used to override --squash. With --squash, --commit is not allowed, and will fail. Only useful when merging. --[no-]verify By default, the pre-merge and commit-msg hooks are run. When --no-verify is given, these are bypassed. See also githooks(5). Only useful when merging. -s <strategy>, --strategy=<strategy> Use the given merge strategy; can be supplied more than once to specify them in the order they should be tried. If there is no -s option, a built-in list of strategies is used instead (ort when merging a single head, octopus otherwise). -X <option>, --strategy-option=<option> Pass merge strategy specific option through to the merge strategy. --verify-signatures, --no-verify-signatures Verify that the tip commit of the side branch being merged is signed with a valid key, i.e. a key that has a valid uid: in the default trust model, this means the signing key has been signed by a trusted key. If the tip commit of the side branch is not signed with a valid key, the merge is aborted. Only useful when merging. --summary, --no-summary Synonyms to --stat and --no-stat; these are deprecated and will be removed in the future. --autostash, --no-autostash Automatically create a temporary stash entry before the operation begins, record it in the special ref MERGE_AUTOSTASH and apply it after the operation ends. This means that you can run the operation on a dirty worktree. However, use with care: the final stash application after a successful merge might result in non-trivial conflicts. --allow-unrelated-histories By default, git merge command refuses to merge histories that do not share a common ancestor. This option can be used to override this safety when merging histories of two projects that started their lives independently. As that is a very rare occasion, no configuration variable to enable this by default exists and will not be added. Only useful when merging. -r, --rebase[=false|true|merges|interactive] When true, rebase the current branch on top of the upstream branch after fetching. If there is a remote-tracking branch corresponding to the upstream branch and the upstream branch was rebased since last fetched, the rebase uses that information to avoid rebasing non-local changes. When set to merges, rebase using git rebase --rebase-merges so that the local merge commits are included in the rebase (see git-rebase(1) for details). When false, merge the upstream branch into the current branch. When interactive, enable the interactive mode of rebase. See pull.rebase, branch.<name>.rebase and branch.autoSetupRebase in git-config(1) if you want to make git pull always use --rebase instead of merging. Note This is a potentially dangerous mode of operation. It rewrites history, which does not bode well when you published that history already. Do not use this option unless you have read git-rebase(1) carefully. --no-rebase This is shorthand for --rebase=false. Options related to fetching --all Fetch all remotes. 
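A hedged example combining the rebase and autostash behaviours described above (nothing here is taken from the original page):

    # stash local modifications, rebase onto the upstream branch, then re-apply the stash
    $ git pull --rebase --autostash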
-a, --append Append ref names and object names of fetched refs to the existing contents of .git/FETCH_HEAD. Without this option old data in .git/FETCH_HEAD will be overwritten. --atomic Use an atomic transaction to update local refs. Either all refs are updated, or on error, no refs are updated. --depth=<depth> Limit fetching to the specified number of commits from the tip of each remote branch history. If fetching to a shallow repository created by git clone with --depth=<depth> option (see git-clone(1)), deepen or shorten the history to the specified number of commits. Tags for the deepened commits are not fetched. --deepen=<depth> Similar to --depth, except it specifies the number of commits from the current shallow boundary instead of from the tip of each remote branch history. --shallow-since=<date> Deepen or shorten the history of a shallow repository to include all reachable commits after <date>. --shallow-exclude=<revision> Deepen or shorten the history of a shallow repository to exclude commits reachable from a specified remote branch or tag. This option can be specified multiple times. --unshallow If the source repository is complete, convert a shallow repository to a complete one, removing all the limitations imposed by shallow repositories. If the source repository is shallow, fetch as much as possible so that the current repository has the same history as the source repository. --update-shallow By default when fetching from a shallow repository, git fetch refuses refs that require updating .git/shallow. This option updates .git/shallow and accept such refs. --negotiation-tip=<commit|glob> By default, Git will report, to the server, commits reachable from all local refs to find common commits in an attempt to reduce the size of the to-be-received packfile. If specified, Git will only report commits reachable from the given tips. This is useful to speed up fetches when the user knows which local ref is likely to have commits in common with the upstream ref being fetched. This option may be specified more than once; if so, Git will report commits reachable from any of the given commits. The argument to this option may be a glob on ref names, a ref, or the (possibly abbreviated) SHA-1 of a commit. Specifying a glob is equivalent to specifying this option multiple times, one for each matching ref name. See also the fetch.negotiationAlgorithm and push.negotiate configuration variables documented in git-config(1), and the --negotiate-only option below. --negotiate-only Do not fetch anything from the server, and instead print the ancestors of the provided --negotiation-tip=* arguments, which we have in common with the server. This is incompatible with --recurse-submodules=[yes|on-demand]. Internally this is used to implement the push.negotiate option, see git-config(1). --dry-run Show what would be done, without making any changes. --porcelain Print the output to standard output in an easy-to-parse format for scripts. See section OUTPUT in git-fetch(1) for details. This is incompatible with --recurse-submodules=[yes|on-demand] and takes precedence over the fetch.output config option. -f, --force When git fetch is used with <src>:<dst> refspec it may refuse to update the local branch as discussed in the <refspec> part of the git-fetch(1) documentation. This option overrides that check. -k, --keep Keep downloaded pack. --prefetch Modify the configured refspec to place all refs into the refs/prefetch/ namespace. See the prefetch task in git-maintenance(1). 
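For illustration (the remote name, branch name and depth value are placeholders), the fetch-related options above can be passed straight to git pull:

    # show what would be fetched without changing anything
    $ git pull --dry-run origin

    # limit the fetched history to the 50 most recent commits of each branch
    $ git pull --depth=50 origin main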
-p, --prune Before fetching, remove any remote-tracking references that no longer exist on the remote. Tags are not subject to pruning if they are fetched only because of the default tag auto-following or due to a --tags option. However, if tags are fetched due to an explicit refspec (either on the command line or in the remote configuration, for example if the remote was cloned with the --mirror option), then they are also subject to pruning. Supplying --prune-tags is a shorthand for providing the tag refspec. --no-tags By default, tags that point at objects that are downloaded from the remote repository are fetched and stored locally. This option disables this automatic tag following. The default behavior for a remote may be specified with the remote.<name>.tagOpt setting. See git-config(1). --refmap=<refspec> When fetching refs listed on the command line, use the specified refspec (can be given more than once) to map the refs to remote-tracking branches, instead of the values of remote.*.fetch configuration variables for the remote repository. Providing an empty <refspec> to the --refmap option causes Git to ignore the configured refspecs and rely entirely on the refspecs supplied as command-line arguments. See section on "Configured Remote-tracking Branches" for details. -t, --tags Fetch all tags from the remote (i.e., fetch remote tags refs/tags/* into local tags with the same name), in addition to whatever else would otherwise be fetched. Using this option alone does not subject tags to pruning, even if --prune is used (though tags may be pruned anyway if they are also the destination of an explicit refspec; see --prune). -j, --jobs=<n> Number of parallel children to be used for all forms of fetching. If the --multiple option was specified, the different remotes will be fetched in parallel. If multiple submodules are fetched, they will be fetched in parallel. To control them independently, use the config settings fetch.parallel and submodule.fetchJobs (see git-config(1)). Typically, parallel recursive and multi-remote fetches will be faster. By default fetches are performed sequentially, not in parallel. --set-upstream If the remote is fetched successfully, add upstream (tracking) reference, used by argument-less git-pull(1) and other commands. For more information, see branch.<name>.merge and branch.<name>.remote in git-config(1). --upload-pack <upload-pack> When given, and the repository to fetch from is handled by git fetch-pack, --exec=<upload-pack> is passed to the command to specify non-default path for the command run on the other end. --progress Progress status is reported on the standard error stream by default when it is attached to a terminal, unless -q is specified. This flag forces progress status even if the standard error stream is not directed to a terminal. -o <option>, --server-option=<option> Transmit the given string to the server when communicating using protocol version 2. The given string must not contain a NUL or LF character. The server’s handling of server options, including unknown ones, is server-specific. When multiple --server-option=<option> are given, they are all sent to the other side in the order listed on the command line. --show-forced-updates By default, git checks if a branch is force-updated during fetch. This can be disabled through fetch.showForcedUpdates, but the --show-forced-updates option guarantees this check occurs. See git-config(1). --no-show-forced-updates By default, git checks if a branch is force-updated during fetch. 
Pass --no-show-forced-updates or set fetch.showForcedUpdates to false to skip this check for performance reasons. If used during git-pull the --ff-only option will still check for forced updates before attempting a fast-forward update. See git-config(1). -4, --ipv4 Use IPv4 addresses only, ignoring IPv6 addresses. -6, --ipv6 Use IPv6 addresses only, ignoring IPv4 addresses. <repository> The "remote" repository that is the source of a fetch or pull operation. This parameter can be either a URL (see the section GIT URLS below) or the name of a remote (see the section REMOTES below). <refspec> Specifies which refs to fetch and which local refs to update. When no <refspec>s appear on the command line, the refs to fetch are read from remote.<repository>.fetch variables instead (see the section "CONFIGURED REMOTE-TRACKING BRANCHES" in git-fetch(1)). The format of a <refspec> parameter is an optional plus +, followed by the source <src>, followed by a colon :, followed by the destination ref <dst>. The colon can be omitted when <dst> is empty. <src> is typically a ref, but it can also be a fully spelled hex object name. A <refspec> may contain a * in its <src> to indicate a simple pattern match. Such a refspec functions like a glob that matches any ref with the same prefix. A pattern <refspec> must have a * in both the <src> and <dst>. It will map refs to the destination by replacing the * with the contents matched from the source. If a refspec is prefixed by ^, it will be interpreted as a negative refspec. Rather than specifying which refs to fetch or which local refs to update, such a refspec will instead specify refs to exclude. A ref will be considered to match if it matches at least one positive refspec, and does not match any negative refspec. Negative refspecs can be useful to restrict the scope of a pattern refspec so that it will not include specific refs. Negative refspecs can themselves be pattern refspecs. However, they may only contain a <src> and do not specify a <dst>. Fully spelled out hex object names are also not supported. tag <tag> means the same as refs/tags/<tag>:refs/tags/<tag>; it requests fetching everything up to the given tag. The remote ref that matches <src> is fetched, and if <dst> is not an empty string, an attempt is made to update the local ref that matches it. Whether that update is allowed without --force depends on the ref namespace it’s being fetched to, the type of object being fetched, and whether the update is considered to be a fast-forward. Generally, the same rules apply for fetching as when pushing, see the <refspec>... section of git-push(1) for what those are. Exceptions to those rules particular to git fetch are noted below. Until Git version 2.20, and unlike when pushing with git-push(1), any updates to refs/tags/* would be accepted without + in the refspec (or --force). When fetching, we promiscuously considered all tag updates from a remote to be forced fetches. Since Git version 2.20, fetching to update refs/tags/* works the same way as when pushing. I.e. any updates will be rejected without + in the refspec (or --force). Unlike when pushing with git-push(1), any updates outside of refs/{tags,heads}/* will be accepted without + in the refspec (or --force), whether that’s swapping e.g. a tree object for a blob, or a commit for another commit that’s doesn’t have the previous commit as an ancestor etc. 
Unlike when pushing with git-push(1), there is no configuration which’ll amend these rules, and nothing like a pre-fetch hook analogous to the pre-receive hook. As with pushing with git-push(1), all of the rules described above about what’s not allowed as an update can be overridden by adding an the optional leading + to a refspec (or using --force command line option). The only exception to this is that no amount of forcing will make the refs/heads/* namespace accept a non-commit object. Note When the remote branch you want to fetch is known to be rewound and rebased regularly, it is expected that its new tip will not be descendant of its previous tip (as stored in your remote-tracking branch the last time you fetched). You would want to use the + sign to indicate non-fast-forward updates will be needed for such branches. There is no way to determine or declare that a branch will be made available in a repository with this behavior; the pulling user simply must know this is the expected usage pattern for a branch. Note There is a difference between listing multiple <refspec> directly on git pull command line and having multiple remote.<repository>.fetch entries in your configuration for a <repository> and running a git pull command without any explicit <refspec> parameters. <refspec>s listed explicitly on the command line are always merged into the current branch after fetching. In other words, if you list more than one remote ref, git pull will create an Octopus merge. On the other hand, if you do not list any explicit <refspec> parameter on the command line, git pull will fetch all the <refspec>s it finds in the remote.<repository>.fetch configuration and merge only the first <refspec> found into the current branch. This is because making an Octopus from remote refs is rarely done, while keeping track of multiple remote heads in one-go by fetching more than one is often useful. | # git pull
> Fetch a branch from a remote repository and merge it into the local repository. More
> information: https://git-scm.com/docs/git-pull.
* Download changes from the default remote repository and merge them:
`git pull`
* Download changes from the default remote repository and rebase the current branch on top of them:
`git pull --rebase`
* Download changes from a given remote repository and branch, then merge them into HEAD:
`git pull {{remote_name}} {{branch}}` |
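* A fuller sketch combining the examples above (remote and branch names are placeholders): pull a specific branch and replay local commits on top of it:
`git pull --rebase {{remote_name}} {{branch}}`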
yacc | The yacc utility shall read a description of a context-free grammar in grammar and write C source code, conforming to the ISO C standard, to a code file, and optionally header information into a header file, in the current directory. The generated source code shall not depend on any undefined, unspecified, or implementation-defined behavior, except in cases where it is copied directly from the supplied grammar, or in cases that are documented by the implementation. The C code shall define a function and related routines and macros for an automaton that executes a parsing algorithm meeting the requirements in Algorithms. The form and meaning of the grammar are described in the EXTENDED DESCRIPTION section. The C source code and header file shall be produced in a form suitable as input for the C compiler (see c99(1p)). The yacc utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines, except for Guideline 9. The following options shall be supported: -b file_prefix Use file_prefix instead of y as the prefix for all output filenames. The code file y.tab.c, the header file y.tab.h (created when -d is specified), and the description file y.output (created when -v is specified), shall be changed to file_prefix.tab.c, file_prefix.tab.h, and file_prefix.output, respectively. -d Write the header file; by default only the code file is written. See the OUTPUT FILES section. -l Produce a code file that does not contain any #line constructs. If this option is not present, it is unspecified whether the code file or header file contains #line directives. This should only be used after the grammar and the associated actions are fully debugged. -p sym_prefix Use sym_prefix instead of yy as the prefix for all external names produced by yacc. The names affected shall include the functions yyparse(), yylex(), and yyerror(), and the variables yylval, yychar, and yydebug. (In the remainder of this section, the six symbols cited are referenced using their default names only as a notational convenience.) Local names may also be affected by the -p option; however, the -p option shall not affect #define symbols generated by yacc. -t Modify conditional compilation directives to permit compilation of debugging code in the code file. Runtime debugging statements shall always be contained in the code file, but by default conditional compilation directives prevent their compilation. -v Write a file containing a description of the parser and a report of conflicts generated by ambiguities in the grammar. | # yacc
> Generate an LALR parser (in C) with a given formal grammar specification
> file. See also: `bison`. More information: https://manned.org/man/yacc.1p.
* Compile a grammar file, writing the C parser code to `y.tab.c` and the constant (token) declarations to `y.tab.h` (the `y.tab.h` header is created only when the `-d` flag is used):
`yacc -d {{path/to/grammar_file.y}}`
* Compile a grammar file and also write `y.output`, containing a description of the parser and a report of conflicts generated by ambiguities in the grammar:
`yacc -d {{path/to/grammar_file.y}} -v`
* Compile a grammar file, and prefix output filenames with `prefix` instead of `y`:
`yacc -d {{path/to/grammar_file.y}} -v -b {{prefix}}` |
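* A possible follow-up build step (a sketch only: it assumes the grammar's C sections supply `main()` and `yylex()`, and that a `c99` compiler is available):
`yacc -d {{path/to/grammar_file.y}} && c99 y.tab.c -o {{parser_name}}`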
df | This manual page documents the GNU version of df. df displays the amount of space available on the file system containing each file name argument. If no file name is given, the space available on all currently mounted file systems is shown. Space is shown in 1K blocks by default, unless the environment variable POSIXLY_CORRECT is set, in which case 512-byte blocks are used. If an argument is the absolute file name of a device node containing a mounted file system, df shows the space available on that file system rather than on the file system containing the device node. This version of df cannot show the space available on unmounted file systems, because on most kinds of systems doing so requires very nonportable intimate knowledge of file system structures. Show information about the file system on which each FILE resides, or all file systems by default. Mandatory arguments to long options are mandatory for short options too. -a, --all include pseudo, duplicate, inaccessible file systems -B, --block-size=SIZE scale sizes by SIZE before printing them; e.g., '-BM' prints sizes in units of 1,048,576 bytes; see SIZE format below -h, --human-readable print sizes in powers of 1024 (e.g., 1023M) -H, --si print sizes in powers of 1000 (e.g., 1.1G) -i, --inodes list inode information instead of block usage -k like --block-size=1K -l, --local limit listing to local file systems --no-sync do not invoke sync before getting usage info (default) --output[=FIELD_LIST] use the output format defined by FIELD_LIST, or print all fields if FIELD_LIST is omitted. -P, --portability use the POSIX output format --sync invoke sync before getting usage info --total elide all entries insignificant to available space, and produce a grand total -t, --type=TYPE limit listing to file systems of type TYPE -T, --print-type print file system type -x, --exclude-type=TYPE limit listing to file systems not of type TYPE -v (ignored) --help display this help and exit --version output version information and exit Display values are in units of the first available SIZE from --block-size, and the DF_BLOCK_SIZE, BLOCK_SIZE and BLOCKSIZE environment variables. Otherwise, units default to 1024 bytes (or 512 if POSIXLY_CORRECT is set). The SIZE argument is an integer and optional unit (example: 10K is 10*1024). Units are K,M,G,T,P,E,Z,Y,R,Q (powers of 1024) or KB,MB,... (powers of 1000). Binary prefixes can be used, too: KiB=K, MiB=M, and so on. FIELD_LIST is a comma-separated list of columns to be included. Valid field names are: 'source', 'fstype', 'itotal', 'iused', 'iavail', 'ipcent', 'size', 'used', 'avail', 'pcent', 'file' and 'target' (see info page). | # df
> Display an overview of filesystem disk space usage. More information:
> https://www.gnu.org/software/coreutils/df.
* Display all filesystems and their disk usage:
`df`
* Display all filesystems and their disk usage in human-readable form:
`df -h`
* Display the filesystem containing the given file or directory, along with its disk usage:
`df {{path/to/file_or_directory}}`
* Display statistics on the number of free inodes:
`df -i`
* Display filesystems but exclude the specified types:
`df -x {{squashfs}} -x {{tmpfs}}` |
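* Show only selected columns using the documented `--output` option (the field list here is just one possible choice):
`df --output=source,fstype,size,used,pcent,target`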
ipcmk | ipcmk allows you to create System V inter-process communication (IPC) objects: shared memory segments, message queues, and semaphore arrays. Resources can be specified with these options: -M, --shmem size Create a shared memory segment of size bytes. The size argument may be followed by the multiplicative suffixes KiB (=1024), MiB (=1024*1024), and so on for GiB, etc. (the "iB" is optional, e.g., "K" has the same meaning as "KiB") or the suffixes KB (=1000), MB (=1000*1000), and so on for GB, etc. -Q, --queue Create a message queue. -S, --semaphore number Create a semaphore array with number of elements. Other options are: -p, --mode mode Access permissions for the resource. Default is 0644. -h, --help Display help text and exit. -V, --version Print version and exit. | # ipcmk
> Create IPC (Inter-process Communication) resources. More information:
> https://manned.org/ipcmk.
* Create a shared memory segment:
`ipcmk --shmem {{segment_size_in_bytes}}`
* Create a semaphore array with a given number of elements:
`ipcmk --semaphore {{number_of_elements}}`
* Create a message queue:
`ipcmk --queue`
* Create a shared memory segment with specific permissions (default is 0644):
`ipcmk --shmem {{segment_size_in_bytes}} --mode {{octal_permissions}}` |
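* A sketch combining the options above: create a 1 MiB segment with owner-only permissions, then list segments with `ipcs` (a separate util-linux tool assumed to be installed):
`ipcmk --shmem 1MiB --mode 0600 && ipcs -m`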
newgrp | The newgrp utility shall create a new shell execution environment with a new real and effective group identification. Of the attributes listed in Section 2.12, Shell Execution Environment, the new shell execution environment shall retain the working directory, file creation mask, and exported variables from the previous environment (that is, open files, traps, unexported variables, alias definitions, shell functions, and set options may be lost). All other aspects of the process environment that are preserved by the exec family of functions defined in the System Interfaces volume of POSIX.1‐2017 shall also be preserved by newgrp; whether other aspects are preserved is unspecified. A failure to assign the new group identifications (for example, for security or password-related reasons) shall not prevent the new shell execution environment from being created. The newgrp utility shall affect the supplemental groups for the process as follows: * On systems where the effective group ID is normally in the supplementary group list (or whenever the old effective group ID actually is in the supplementary group list): -- If the new effective group ID is also in the supplementary group list, newgrp shall change the effective group ID. -- If the new effective group ID is not in the supplementary group list, newgrp shall add the new effective group ID to the list, if there is room to add it. * On systems where the effective group ID is not normally in the supplementary group list (or whenever the old effective group ID is not in the supplementary group list): -- If the new effective group ID is in the supplementary group list, newgrp shall delete it. -- If the old effective group ID is not in the supplementary list, newgrp shall add it if there is room. Note: The System Interfaces volume of POSIX.1‐2017 does not specify whether the effective group ID of a process is included in its supplementary group list. With no operands, newgrp shall change the effective group back to the groups identified in the user's user entry, and shall set the list of supplementary groups to that set in the user's group database entries. If the first argument is '-', the results are unspecified. If a password is required for the specified group, and the user is not listed as a member of that group in the group database, the user shall be prompted to enter the correct password for that group. If the user is listed as a member of that group, no password shall be requested. If no password is required for the specified group, it is implementation-defined whether users not listed as members of that group can change to that group. Whether or not a password is required, implementation-defined system accounting or security mechanisms may impose additional authorization restrictions that may cause newgrp to write a diagnostic message and suppress the changing of the group identification. The newgrp utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines, except for the unspecified usage of '-'. The following option shall be supported: -l (The letter ell.) Change the environment to what would be expected if the user actually logged in again. | # newgrp
> Switch primary group membership. More information:
> https://manned.org/newgrp.
* Change user's primary group membership:
`newgrp {{group_name}}`
* Reset primary group membership to user's default group in `/etc/passwd`:
`newgrp` |
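* Switch to a group and reinitialize the environment as if logging in again (the documented `-l` option; the group name is a placeholder):
`newgrp -l {{group_name}}`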
ssh-agent | ssh-agent is a program to hold private keys used for public key authentication. Through use of environment variables the agent can be located and automatically used for authentication when logging in to other machines using ssh(1). The options are as follows: -a bind_address Bind the agent to the UNIX-domain socket bind_address. The default is $TMPDIR/ssh-XXXXXXXXXX/agent.<ppid>. -c Generate C-shell commands on stdout. This is the default if SHELL looks like it's a csh style of shell. -D Foreground mode. When this option is specified, ssh-agent will not fork. -d Debug mode. When this option is specified, ssh-agent will not fork and will write debug information to standard error. -E fingerprint_hash Specifies the hash algorithm used when displaying key fingerprints. Valid options are: “md5” and “sha256”. The default is “sha256”. -k Kill the current agent (given by the SSH_AGENT_PID environment variable). -O option Specify an option when starting ssh-agent. Currently only one option is supported: no-restrict-websafe. This instructs ssh-agent to permit signatures using FIDO keys that might be web authentication requests. By default, ssh-agent refuses signature requests for FIDO keys where the key application string does not start with “ssh:” and when the data to be signed does not appear to be a ssh(1) user authentication request or a ssh-keygen(1) signature. The default behaviour prevents forwarded access to a FIDO key from also implicitly forwarding the ability to authenticate to websites. -P allowed_providers Specify a pattern-list of acceptable paths for PKCS#11 provider and FIDO authenticator middleware shared libraries that may be used with the -S or -s options to ssh-add(1). Libraries that do not match the pattern list will be refused. See PATTERNS in ssh_config(5) for a description of pattern-list syntax. The default list is “/usr/lib/*,/usr/local/lib/*”. -s Generate Bourne shell commands on stdout. This is the default if SHELL does not look like it's a csh style of shell. -t life Set a default value for the maximum lifetime of identities added to the agent. The lifetime may be specified in seconds or in a time format specified in sshd_config(5). A lifetime specified for an identity with ssh-add(1) overrides this value. Without this option the default maximum lifetime is forever. command [arg ...] If a command (and optional arguments) is given, this is executed as a subprocess of the agent. The agent exits automatically when the command given on the command line terminates. There are two main ways to get an agent set up. The first is at the start of an X session, where all other windows or programs are started as children of the ssh-agent program. The agent starts a command under which its environment variables are exported, for example ssh-agent xterm &. When the command terminates, so does the agent. The second method is used for a login session. When ssh-agent is started, it prints the shell commands required to set its environment variables, which in turn can be evaluated in the calling shell, for example eval `ssh-agent -s`. In both cases, ssh(1) looks at these environment variables and uses them to establish a connection to the agent. The agent initially does not have any private keys. Keys are added using ssh-add(1) or by ssh(1) when AddKeysToAgent is set in ssh_config(5). Multiple identities may be stored in ssh-agent concurrently and ssh(1) will automatically use them if present. 
ssh-add(1) is also used to remove keys from ssh-agent and to query the keys that are held in one. Connections to ssh-agent may be forwarded from further remote hosts using the -A option to ssh(1) (but see the caveats documented therein), avoiding the need for authentication data to be stored on other machines. Authentication passphrases and private keys never go over the network: the connection to the agent is forwarded over SSH remote connections and the result is returned to the requester, allowing the user access to their identities anywhere in the network in a secure fashion. | # ssh-agent
> Spawn an SSH Agent process. An SSH Agent holds SSH keys decrypted in memory
> until removed or the process is killed. See also `ssh-add`, which can add
> and manage keys held by an SSH Agent. More information:
> https://man.openbsd.org/ssh-agent.
* Start an SSH Agent for the current shell:
`eval $(ssh-agent)`
* Kill the currently running agent:
`ssh-agent -k` |
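* A sketch using the documented `-t` option: start an agent whose added identities expire after one hour by default (the time format follows `sshd_config(5)`):
`eval "$(ssh-agent -s -t 1h)"`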
basenc | basenc encode or decode FILE, or standard input, to standard output. With no FILE, or when FILE is -, read standard input. Mandatory arguments to long options are mandatory for short options too. --base64 same as 'base64' program (RFC4648 section 4) --base64url file- and url-safe base64 (RFC4648 section 5) --base32 same as 'base32' program (RFC4648 section 6) --base32hex extended hex alphabet base32 (RFC4648 section 7) --base16 hex encoding (RFC4648 section 8) --base2msbf bit string with most significant bit (msb) first --base2lsbf bit string with least significant bit (lsb) first -d, --decode decode data -i, --ignore-garbage when decoding, ignore non-alphabet characters -w, --wrap=COLS wrap encoded lines after COLS character (default 76). Use 0 to disable line wrapping --z85 ascii85-like encoding (ZeroMQ spec:32/Z85); when encoding, input length must be a multiple of 4; when decoding, input length must be a multiple of 5 --help display this help and exit --version output version information and exit When decoding, the input may contain newlines in addition to the bytes of the formal alphabet. Use --ignore-garbage to attempt to recover from any other non-alphabet bytes in the encoded stream. | # basenc
> Encode or decode a file or `stdin` to `stdout`, using a specified encoding.
> More information: https://www.gnu.org/software/coreutils/basenc.
* Encode a file with base64 encoding:
`basenc --base64 {{path/to/file}}`
* Decode a file with base64 encoding:
`basenc --decode --base64 {{path/to/file}}`
* Encode from `stdin` with base32 encoding, wrapping lines after 42 characters:
`{{command}} | basenc --base32 -w42`
* Encode from `stdin` with base32 encoding:
`{{command}} | basenc --base32` |
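* A quick round-trip sanity check (the sample text and choice of encoding are arbitrary): encode from `stdin`, then decode the result back:
`printf '{{some text}}' | basenc --base32hex | basenc --base32hex --decode`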
locale | The locale command displays information about the current locale, or all locales, on standard output. When invoked without arguments, locale displays the current locale settings for each locale category (see locale(5)), based on the settings of the environment variables that control the locale (see locale(7)). Values for variables set in the environment are printed without double quotes, implied values are printed with double quotes. If either the -a or the -m option (or one of their long-format equivalents) is specified, the behavior is as follows: -a, --all-locales Display a list of all available locales. The -v option causes the LC_IDENTIFICATION metadata about each locale to be included in the output. -m, --charmaps Display the available charmaps (character set description files). To display the current character set for the locale, use locale -c charmap. The locale command can also be provided with one or more arguments, which are the names of locale keywords (for example, date_fmt, ctype-class-names, yesexpr, or decimal_point) or locale categories (for example, LC_CTYPE or LC_TIME). For each argument, the following is displayed: • For a locale keyword, the value of that keyword to be displayed. • For a locale category, the values of all keywords in that category are displayed. When arguments are supplied, the following options are meaningful: -c, --category-name For a category name argument, write the name of the locale category on a separate line preceding the list of keyword values for that category. For a keyword name argument, write the name of the locale category for this keyword on a separate line preceding the keyword value. This option improves readability when multiple name arguments are specified. It can be combined with the -k option. -k, --keyword-name For each keyword whose value is being displayed, include also the name of that keyword, so that the output has the format: keyword="value" The locale command also knows about the following options: -v, --verbose Display additional information for some command-line option and argument combinations. -?, --help Display a summary of command-line options and arguments and exit. --usage Display a short usage message and exit. -V, --version Display the program version and exit. | # locale
> Get locale-specific information. More information:
> https://manned.org/locale.
* List the current locale settings for each locale category:
`locale`
* List all available locales:
`locale --all-locales`
* Display all available locales and the associated metadata:
`locale --all-locales --verbose`
* Display the current date format:
`locale date_fmt` |
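* Print every keyword of a category together with its name and the category header, combining the documented `-c` and `-k` options (`LC_TIME` is just an example category):
`locale -ck LC_TIME`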
unlink | The unlink utility shall perform the function call: unlink(file); A user may need appropriate privileges to invoke the unlink utility. None. | # unlink
> Remove a link to a file from the filesystem. The file's contents are lost if
> the link is the last one to the file. More information:
> https://www.gnu.org/software/coreutils/unlink.
* Remove the link to the specified file (its data is freed if this was the last link):
`unlink {{path/to/file}}` |
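* Remove a file whose name begins with a dash by giving an explicit path (ordinary shell path handling rather than an `unlink`-specific feature; the name is an example):
`unlink ./{{-file_with_leading_dash}}`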
diff3 | Compare three files line by line. Mandatory arguments to long options are mandatory for short options too. -A, --show-all output all changes, bracketing conflicts -e, --ed output ed script incorporating changes from OLDFILE to YOURFILE into MYFILE -E, --show-overlap like -e, but bracket conflicts -3, --easy-only like -e, but incorporate only nonoverlapping changes -x, --overlap-only like -e, but incorporate only overlapping changes -X like -x, but bracket conflicts -i append 'w' and 'q' commands to ed scripts -m, --merge output actual merged file, according to -A if no other options are given -a, --text treat all files as text --strip-trailing-cr strip trailing carriage return on input -T, --initial-tab make tabs line up by prepending a tab --diff-program=PROGRAM use PROGRAM to compare files -L, --label=LABEL use LABEL instead of file name (can be repeated up to three times) --help display this help and exit -v, --version output version information and exit The default output format is a somewhat human-readable representation of the changes. The -e, -E, -x, -X (and corresponding long) options cause an ed script to be output instead of the default. Finally, the -m (--merge) option causes diff3 to do the merge internally and output the actual merged file. For unusual input, this is more robust than using ed. If a FILE is '-', read standard input. Exit status is 0 if successful, 1 if conflicts, 2 if trouble. | # diff3
> Compare three files line by line. More information:
> https://www.gnu.org/software/diffutils/manual/html_node/Invoking-diff3.html.
* Compare files:
`diff3 {{path/to/file1}} {{path/to/file2}} {{path/to/file3}}`
* Show all changes, bracketing conflicts:
`diff3 --show-all {{path/to/file1}} {{path/to/file2}} {{path/to/file3}}` |
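* A merge sketch using the documented `--merge` option (the file roles and the output redirection are illustrative):
`diff3 --merge {{path/to/my_file}} {{path/to/old_file}} {{path/to/their_file}} > {{path/to/merged_file}}`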
gpasswd | The gpasswd command is used to administer /etc/group, and /etc/gshadow. Every group can have administrators, members and a password. System administrators can use the -A option to define group administrator(s) and the -M option to define members. They have all rights of group administrators and members. gpasswd called by a group administrator with a group name only prompts for the new password of the group. If a password is set the members can still use newgrp(1) without a password, and non-members must supply the password. Notes about group passwords Group passwords are an inherent security problem since more than one person is permitted to know the password. However, groups are a useful tool for permitting co-operation between different users. Except for the -A and -M options, the options cannot be combined. The options which apply to the gpasswd command are: -a, --add user Add the user to the named group. -d, --delete user Remove the user from the named group. -h, --help Display help message and exit. -Q, --root CHROOT_DIR Apply changes in the CHROOT_DIR directory and use the configuration files from the CHROOT_DIR directory. Only absolute paths are supported. -r, --remove-password Remove the password from the named group. The group password will be empty. Only group members will be allowed to use newgrp to join the named group. -R, --restrict Restrict the access to the named group. The group password is set to "!". Only group members with a password will be allowed to use newgrp to join the named group. -A, --administrators user,... Set the list of administrative users. -M, --members user,... Set the list of group members. | # gpasswd
> Administer `/etc/group` and `/etc/gshadow`. More information:
> https://manned.org/gpasswd.
* Define group administrators:
`sudo gpasswd -A {{user1,user2}} {{group}}`
* Set the list of group members:
`sudo gpasswd -M {{user1,user2}} {{group}}`
* Create a password for the named group:
`gpasswd {{group}}`
* Add a user to the named group:
`gpasswd -a {{user}} {{group}}`
* Remove a user from the named group:
`gpasswd -d {{user}} {{group}}` |
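* Set administrators and members in one invocation (per the description above, `-A` and `-M` are the only options that may be combined; the names are placeholders):
`sudo gpasswd -A {{admin1,admin2}} -M {{user1,user2}} {{group}}`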
htop | htop is a cross-platform ncurses-based process viewer. It is similar to top, but allows you to scroll vertically and horizontally, and interact using a pointing device (mouse). You can observe all processes running on the system, along with their command line arguments, as well as view them in a tree format, select multiple processes and act on them all at once. Tasks related to processes (killing, renicing) can be done without entering their PIDs. pcp-htop is a version of htop built using the Performance Co- Pilot (PCP) Metrics API (see PCPIntro(1), PMAPI(3)), allowing to extend htop to display values from arbitrary metrics. See the section below titled CONFIG FILES for further details. | # htop
> Display dynamic real-time information about running processes. An enhanced
> version of `top`. More information: https://htop.dev/.
* Start `htop`:
`htop`
* Start `htop` displaying processes owned by a specific user:
`htop --user {{username}}`
* Sort processes by a specified `sort_item` (use `htop --sort help` for available options):
`htop --sort {{sort_item}}`
* See interactive commands while running htop:
`?`
* Switch to a different tab:
`tab`
* Display help:
`htop --help` |
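* A combined sketch of the options above (the sort key is a placeholder whose exact name may vary between versions; check `htop --sort help`): show one user's processes ordered by memory usage:
`htop --user {{username}} --sort {{PERCENT_MEM}}`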
git-show | Shows one or more objects (blobs, trees, tags and commits). For commits it shows the log message and textual diff. It also presents the merge commit in a special format as produced by git diff-tree --cc. For tags, it shows the tag message and the referenced objects. For trees, it shows the names (equivalent to git ls-tree with --name-only). For plain blobs, it shows the plain contents. The command takes options applicable to the git diff-tree command to control how the changes the commit introduces are shown. This manual page describes only the most frequently used options. <object>... The names of objects to show (defaults to HEAD). For a more complete list of ways to spell object names, see "SPECIFYING REVISIONS" section in gitrevisions(7). --pretty[=<format>], --format=<format> Pretty-print the contents of the commit logs in a given format, where <format> can be one of oneline, short, medium, full, fuller, reference, email, raw, format:<string> and tformat:<string>. When <format> is none of the above, and has %placeholder in it, it acts as if --pretty=tformat:<format> were given. See the "PRETTY FORMATS" section for some additional details for each format. When =<format> part is omitted, it defaults to medium. Note: you can specify the default pretty format in the repository configuration (see git-config(1)). --abbrev-commit Instead of showing the full 40-byte hexadecimal commit object name, show a prefix that names the object uniquely. "--abbrev=<n>" (which also modifies diff output, if it is displayed) option can be used to specify the minimum length of the prefix. This should make "--pretty=oneline" a whole lot more readable for people using 80-column terminals. --no-abbrev-commit Show the full 40-byte hexadecimal commit object name. This negates --abbrev-commit, either explicit or implied by other options such as "--oneline". It also overrides the log.abbrevCommit variable. --oneline This is a shorthand for "--pretty=oneline --abbrev-commit" used together. --encoding=<encoding> Commit objects record the character encoding used for the log message in their encoding header; this option can be used to tell the command to re-code the commit log message in the encoding preferred by the user. For non plumbing commands this defaults to UTF-8. Note that if an object claims to be encoded in X and we are outputting in X, we will output the object verbatim; this means that invalid sequences in the original commit may be copied to the output. Likewise, if iconv(3) fails to convert the commit, we will quietly output the original object verbatim. --expand-tabs=<n>, --expand-tabs, --no-expand-tabs Perform a tab expansion (replace each tab with enough spaces to fill to the next display column that is multiple of <n>) in the log message before showing it in the output. --expand-tabs is a short-hand for --expand-tabs=8, and --no-expand-tabs is a short-hand for --expand-tabs=0, which disables tab expansion. By default, tabs are expanded in pretty formats that indent the log message by 4 spaces (i.e. medium, which is the default, full, and fuller). --notes[=<ref>] Show the notes (see git-notes(1)) that annotate the commit, when showing the commit log message. This is the default for git log, git show and git whatchanged commands when there is no --pretty, --format, or --oneline option given on the command line. By default, the notes shown are from the notes refs listed in the core.notesRef and notes.displayRef variables (or corresponding environment overrides). 
See git-config(1) for more details. With an optional <ref> argument, use the ref to find the notes to display. The ref can specify the full refname when it begins with refs/notes/; when it begins with notes/, refs/ and otherwise refs/notes/ is prefixed to form a full name of the ref. Multiple --notes options can be combined to control which notes are being displayed. Examples: "--notes=foo" will show only notes from "refs/notes/foo"; "--notes=foo --notes" will show both notes from "refs/notes/foo" and from the default notes ref(s). --no-notes Do not show notes. This negates the above --notes option, by resetting the list of notes refs from which notes are shown. Options are parsed in the order given on the command line, so e.g. "--notes --notes=foo --no-notes --notes=bar" will only show notes from "refs/notes/bar". --show-notes[=<ref>], --[no-]standard-notes These options are deprecated. Use the above --notes/--no-notes options instead. --show-signature Check the validity of a signed commit object by passing the signature to gpg --verify and show the output. | # git show
> Show various types of Git objects (commits, tags, etc.). More information:
> https://git-scm.com/docs/git-show.
* Show information about the latest commit (hash, message, changes, and other metadata):
`git show`
* Show information about a given commit:
`git show {{commit}}`
* Show information about the commit associated with a given tag:
`git show {{tag}}`
* Show information about the commit 3 commits before the HEAD of a branch:
`git show {{branch}}~{{3}}`
* Show a commit's message in a single line, suppressing the diff output:
`git show --oneline -s {{commit}}`
* Show only statistics (added/removed lines) about the changed files:
`git show --stat {{commit}}`
* Show only the list of added, renamed or deleted files:
`git show --summary {{commit}}`
* Show the contents of a file as it was at a given revision (e.g. branch, tag or commit):
`git show {{revision}}:{{path/to/file}}` |
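* A sketch combining options documented above: show a commit with full author and committer details and check its GPG signature:
`git show --pretty=fuller --show-signature {{commit}}`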
tar | GNU tar is an archiving program designed to store multiple files in a single file (an archive), and to manipulate such archives. The archive can be either a regular file or a device (e.g. a tape drive, hence the name of the program, which stands for tape archiver), which can be located either on the local or on a remote machine. Option styles Options to GNU tar can be given in three different styles. In traditional style, the first argument is a cluster of option letters and all subsequent arguments supply arguments to those options that require them. The arguments are read in the same order as the option letters. Any command line words that remain after all options has been processed are treated as non-optional arguments: file or archive member names. For example, the c option requires creating the archive, the v option requests the verbose operation, and the f option takes an argument that sets the name of the archive to operate upon. The following command, written in the traditional style, instructs tar to store all files from the directory /etc into the archive file etc.tar verbosely listing the files being archived: tar cfv etc.tar /etc In UNIX or short-option style, each option letter is prefixed with a single dash, as in other command line utilities. If an option takes argument, the argument follows it, either as a separate command line word, or immediately following the option. However, if the option takes an optional argument, the argument must follow the option letter without any intervening whitespace, as in -g/tmp/snar.db. Any number of options not taking arguments can be clustered together after a single dash, e.g. -vkp. Options that take arguments (whether mandatory or optional), can appear at the end of such a cluster, e.g. -vkpf a.tar. The example command above written in the short-option style could look like: tar -cvf etc.tar /etc or tar -c -v -f etc.tar /etc In GNU or long-option style, each option begins with two dashes and has a meaningful name, consisting of lower-case letters and dashes. When used, the long option can be abbreviated to its initial letters, provided that this does not create ambiguity. Arguments to long options are supplied either as a separate command line word, immediately following the option, or separated from the option by an equals sign with no intervening whitespace. Optional arguments must always use the latter method. Here are several ways of writing the example command in this style: tar --create --file etc.tar --verbose /etc or (abbreviating some options): tar --cre --file=etc.tar --verb /etc The options in all three styles can be intermixed, although doing so with old options is not encouraged. Operation mode The options listed in the table below tell GNU tar what operation it is to perform. Exactly one of them must be given. Meaning of non-optional arguments depends on the operation mode requested. -A, --catenate, --concatenate Append archive to the end of another archive. The arguments are treated as the names of archives to append. All archives must be of the same format as the archive they are appended to, otherwise the resulting archive might be unusable with non-GNU implementations of tar. Notice also that when more than one archive is given, the members from archives other than the first one will be accessible in the resulting archive only if using the -i (--ignore-zeros) option. Compressed archives cannot be concatenated. -c, --create Create a new archive. Arguments supply the names of the files to be archived. 
Directories are archived recursively, unless the --no-recursion option is given. -d, --diff, --compare Find differences between archive and file system. The arguments are optional and specify archive members to compare. If not given, the current working directory is assumed. --delete Delete from the archive. The arguments supply names of the archive members to be removed. At least one argument must be given. This option does not operate on compressed archives. There is no short option equivalent. -r, --append Append files to the end of an archive. Arguments have the same meaning as for -c (--create). -t, --list List the contents of an archive. Arguments are optional. When given, they specify the names of the members to list. --test-label Test the archive volume label and exit. When used without arguments, it prints the volume label (if any) and exits with status 0. When one or more command line arguments are given. tar compares the volume label with each argument. It exits with code 0 if a match is found, and with code 1 otherwise. No output is displayed, unless used together with the -v (--verbose) option. There is no short option equivalent for this option. -u, --update Append files which are newer than the corresponding copy in the archive. Arguments have the same meaning as with -c and -r options. Notice, that newer files don't replace their old archive copies, but instead are appended to the end of archive. The resulting archive can thus contain several members of the same name, corresponding to various versions of the same file. -x, --extract, --get Extract files from an archive. Arguments are optional. When given, they specify names of the archive members to be extracted. --show-defaults Show built-in defaults for various tar options and exit. No arguments are allowed. -?, --help Display a short option summary and exit. No arguments allowed. --usage Display a list of available options and exit. No arguments allowed. --version Print program version and copyright information and exit. Operation modifiers --check-device Check device numbers when creating incremental archives (default). -g, --listed-incremental=FILE Handle new GNU-format incremental backups. FILE is the name of a snapshot file, where tar stores additional information which is used to decide which files changed since the previous incremental dump and, consequently, must be dumped again. If FILE does not exist when creating an archive, it will be created and all files will be added to the resulting archive (the level 0 dump). To create incremental archives of non-zero level N, create a copy of the snapshot file created during the level N-1, and use it as FILE. When listing or extracting, the actual contents of FILE is not inspected, it is needed only due to syntactical requirements. It is therefore common practice to use /dev/null in its place. --hole-detection=METHOD Use METHOD to detect holes in sparse files. This option implies --sparse. Valid values for METHOD are seek and raw. Default is seek with fallback to raw when not applicable. -G, --incremental Handle old GNU-format incremental backups. --ignore-failed-read Do not exit with nonzero on unreadable files. --level=NUMBER Set dump level for created listed-incremental archive. Currently only --level=0 is meaningful: it instructs tar to truncate the snapshot file before dumping, thereby forcing a level 0 dump. -n, --seek Assume the archive is seekable. Normally tar determines automatically whether the archive can be seeked or not. 
This option is intended for use in cases when such recognition fails. It takes effect only if the archive is open for reading (e.g. with --list or --extract options). --no-check-device Do not check device numbers when creating incremental archives. --no-seek Assume the archive is not seekable. --occurrence[=N] Process only the Nth occurrence of each file in the archive. This option is valid only when used with one of the following subcommands: --delete, --diff, --extract or --list and when a list of files is given either on the command line or via the -T option. The default N is 1. --restrict Disable the use of some potentially harmful options. --sparse-version=MAJOR[.MINOR] Set version of the sparse format to use (implies --sparse). This option implies --sparse. Valid argument values are 0.0, 0.1, and 1.0. For a detailed discussion of sparse formats, refer to the GNU Tar Manual, appendix D, "Sparse Formats". Using info reader, it can be accessed running the following command: info tar 'Sparse Formats'. -S, --sparse Handle sparse files efficiently. Some files in the file system may have segments which were actually never written (quite often these are database files created by such systems as DBM). When given this option, tar attempts to determine if the file is sparse prior to archiving it, and if so, to reduce the resulting archive size by not dumping empty parts of the file. Overwrite control These options control tar actions when extracting a file over an existing copy on disk. -k, --keep-old-files Don't replace existing files when extracting. --keep-newer-files Don't replace existing files that are newer than their archive copies. --keep-directory-symlink Don't replace existing symlinks to directories when extracting. --no-overwrite-dir Preserve metadata of existing directories. --one-top-level[=DIR] Extract all files into DIR, or, if used without argument, into a subdirectory named by the base name of the archive (minus standard compression suffixes recognizable by --auto-compress). --overwrite Overwrite existing files when extracting. --overwrite-dir Overwrite metadata of existing directories when extracting (default). --recursive-unlink Recursively remove all files in the directory prior to extracting it. --remove-files Remove files from disk after adding them to the archive. --skip-old-files Don't replace existing files when extracting, silently skip over them. -U, --unlink-first Remove each file prior to extracting over it. -W, --verify Verify the archive after writing it. Output stream selection --ignore-command-error Ignore subprocess exit codes. --no-ignore-command-error Treat non-zero exit codes of children as error (default). -O, --to-stdout Extract files to standard output. --to-command=COMMAND Pipe extracted files to COMMAND. The argument is the pathname of an external program, optionally with command line arguments. The program will be invoked and the contents of the file being extracted supplied to it on its standard input. Additional data will be supplied via the following environment variables: TAR_FILETYPE Type of the file. It is a single letter with the following meaning: f Regular file d Directory l Symbolic link h Hard link b Block device c Character device Currently only regular files are supported. TAR_MODE File mode, an octal number. TAR_FILENAME The name of the file. TAR_REALNAME Name of the file as stored in the archive. TAR_UNAME Name of the file owner. TAR_GNAME Name of the file owner group. TAR_ATIME Time of last access. 
It is a decimal number, representing seconds since the Epoch. If the archive provides times with nanosecond precision, the nanoseconds are appended to the timestamp after a decimal point. TAR_MTIME Time of last modification. TAR_CTIME Time of last status change. TAR_SIZE Size of the file. TAR_UID UID of the file owner. TAR_GID GID of the file owner. Additionally, the following variables contain information about tar operation mode and the archive being processed: TAR_VERSION GNU tar version number. TAR_ARCHIVE The name of the archive tar is processing. TAR_BLOCKING_FACTOR Current blocking factor, i.e. number of 512-byte blocks in a record. TAR_VOLUME Ordinal number of the volume tar is processing (set if reading a multi-volume archive). TAR_FORMAT Format of the archive being processed. One of: gnu, oldgnu, posix, ustar, v7. TAR_SUBCOMMAND A short option (with a leading dash) describing the operation tar is executing. Handling of file attributes --atime-preserve[=METHOD] Preserve access times on dumped files, either by restoring the times after reading (METHOD=replace, this is the default) or by not setting the times in the first place (METHOD=system) --delay-directory-restore Delay setting modification times and permissions of extracted directories until the end of extraction. Use this option when extracting from an archive which has unusual member ordering. --group=NAME[:GID] Force NAME as group for added files. If GID is not supplied, NAME can be either a user name or numeric GID. In this case the missing part (GID or name) will be inferred from the current host's group database. When used with --group-map=FILE, affects only those files whose owner group is not listed in FILE. --group-map=FILE Read group translation map from FILE. Empty lines are ignored. Comments are introduced with # sign and extend to the end of line. Each non-empty line in FILE defines translation for a single group. It must consist of two fields, delimited by any amount of whitespace: OLDGRP NEWGRP[:NEWGID] OLDGRP is either a valid group name or a GID prefixed with +. Unless NEWGID is supplied, NEWGRP must also be either a valid group name or a +GID. Otherwise, both NEWGRP and NEWGID need not be listed in the system group database. As a result, each input file with owner group OLDGRP will be stored in archive with owner group NEWGRP and GID NEWGID. --mode=CHANGES Force symbolic mode CHANGES for added files. --mtime=DATE-OR-FILE Set mtime for added files. DATE-OR-FILE is either a date/time in almost arbitrary format, or the name of an existing file. In the latter case the mtime of that file will be used. -m, --touch Don't extract file modified time. --no-delay-directory-restore Cancel the effect of the prior --delay-directory-restore option. --no-same-owner Extract files as yourself (default for ordinary users). --no-same-permissions Apply the user's umask when extracting permissions from the archive (default for ordinary users). --numeric-owner Always use numbers for user/group names. --owner=NAME[:UID] Force NAME as owner for added files. If UID is not supplied, NAME can be either a user name or numeric UID. In this case the missing part (UID or name) will be inferred from the current host's user database. When used with --owner-map=FILE, affects only those files whose owner is not listed in FILE. --owner-map=FILE Read owner translation map from FILE. Empty lines are ignored. Comments are introduced with # sign and extend to the end of line. Each non-empty line in FILE defines translation for a single UID. 
It must consist of two fields, delimited by any amount of whitespace: OLDUSR NEWUSR[:NEWUID] OLDUSR is either a valid user name or a UID prefixed with +. Unless NEWUID is supplied, NEWUSR must also be either a valid user name or a +UID. Otherwise, both NEWUSR and NEWUID need not be listed in the system user database. As a result, each input file owned by OLDUSR will be stored in archive with owner name NEWUSR and UID NEWUID. -p, --preserve-permissions, --same-permissions extract information about file permissions (default for superuser) --same-owner Try extracting files with the same ownership as exists in the archive (default for superuser). -s, --preserve-order, --same-order Sort names to extract to match archive --sort=ORDER When creating an archive, sort directory entries according to ORDER, which is one of none, name, or inode. The default is --sort=none, which stores archive members in the same order as returned by the operating system. Using --sort=name ensures the member ordering in the created archive is uniform and reproducible. Using --sort=inode reduces the number of disk seeks made when creating the archive and thus can considerably speed up archivation. This sorting order is supported only if the underlying system provides the necessary information. Extended file attributes --acls Enable POSIX ACLs support. --no-acls Disable POSIX ACLs support. --selinux Enable SELinux context support. --no-selinux Disable SELinux context support. --xattrs Enable extended attributes support. --no-xattrs Disable extended attributes support. --xattrs-exclude=PATTERN Specify the exclude pattern for xattr keys. PATTERN is a globbing pattern, e.g. --xattrs-exclude='user.*' to include only attributes from the user namespace. --xattrs-include=PATTERN Specify the include pattern for xattr keys. PATTERN is a globbing pattern. Device selection and switching -f, --file=ARCHIVE Use archive file or device ARCHIVE. If this option is not given, tar will first examine the environment variable `TAPE'. If it is set, its value will be used as the archive name. Otherwise, tar will assume the compiled-in default. The default value can be inspected either using the --show-defaults option, or at the end of the tar --help output. An archive name that has a colon in it specifies a file or device on a remote machine. The part before the colon is taken as the machine name or IP address, and the part after it as the file or device pathname, e.g.: --file=remotehost:/dev/sr0 An optional username can be prefixed to the hostname, placing a @ sign between them. By default, the remote host is accessed via the rsh(1) command. Nowadays it is common to use ssh(1) instead. You can do so by giving the following command line option: --rsh-command=/usr/bin/ssh The remote machine should have the rmt(8) command installed. If its pathname does not match tar's default, you can inform tar about the correct pathname using the --rmt-command option. --force-local Archive file is local even if it has a colon. -F, --info-script=COMMAND, --new-volume-script=COMMAND Run COMMAND at the end of each tape (implies -M). The command can include arguments. When started, it will inherit tar's environment plus the following variables: TAR_VERSION GNU tar version number. TAR_ARCHIVE The name of the archive tar is processing. TAR_BLOCKING_FACTOR Current blocking factor, i.e. number of 512-byte blocks in a record. TAR_VOLUME Ordinal number of the volume tar is processing (set if reading a multi-volume archive). 
TAR_FORMAT Format of the archive being processed. One of: gnu, oldgnu, posix, ustar, v7. TAR_SUBCOMMAND A short option (with a leading dash) describing the operation tar is executing. TAR_FD File descriptor which can be used to communicate the new volume name to tar. If the info script fails, tar exits; otherwise, it begins writing the next volume. -L, --tape-length=N Change tape after writing Nx1024 bytes. If N is followed by a size suffix (see the subsection Size suffixes below), the suffix specifies the multiplicative factor to be used instead of 1024. This option implies -M. -M, --multi-volume Create/list/extract multi-volume archive. --rmt-command=COMMAND Use COMMAND instead of rmt when accessing remote archives. See the description of the -f option, above. --rsh-command=COMMAND Use COMMAND instead of rsh when accessing remote archives. See the description of the -f option, above. --volno-file=FILE When this option is used in conjunction with --multi-volume, tar will keep track of which volume of a multi-volume archive it is working in FILE. Device blocking -b, --blocking-factor=BLOCKS Set record size to BLOCKSx512 bytes. -B, --read-full-records When listing or extracting, accept incomplete input records after end-of-file marker. -i, --ignore-zeros Ignore zeroed blocks in archive. Normally two consecutive 512-blocks filled with zeroes mean EOF and tar stops reading after encountering them. This option instructs it to read further and is useful when reading archives created with the -A option. --record-size=NUMBER Set record size. NUMBER is the number of bytes per record. It must be multiple of 512. It can can be suffixed with a size suffix, e.g. --record-size=10K, for 10 Kilobytes. See the subsection Size suffixes, for a list of valid suffixes. Archive format selection -H, --format=FORMAT Create archive of the given format. Valid formats are: gnu GNU tar 1.13.x format oldgnu GNU format as per tar <= 1.12. pax, posix POSIX 1003.1-2001 (pax) format. ustar POSIX 1003.1-1988 (ustar) format. v7 Old V7 tar format. --old-archive, --portability Same as --format=v7. --pax-option=keyword[[:]=value][,keyword[[:]=value]]... Control pax keywords when creating PAX archives (-H pax). This option is equivalent to the -o option of the pax(1) utility. --posix Same as --format=posix. -V, --label=TEXT Create archive with volume name TEXT. If listing or extracting, use TEXT as a globbing pattern for volume name. Compression options -a, --auto-compress Use archive suffix to determine the compression program. -I, --use-compress-program=COMMAND Filter data through COMMAND. It must accept the -d option, for decompression. The argument can contain command line options. -j, --bzip2 Filter the archive through bzip2(1). -J, --xz Filter the archive through xz(1). --lzip Filter the archive through lzip(1). --lzma Filter the archive through lzma(1). --lzop Filter the archive through lzop(1). --no-auto-compress Do not use archive suffix to determine the compression program. -z, --gzip, --gunzip, --ungzip Filter the archive through gzip(1). -Z, --compress, --uncompress Filter the archive through compress(1). --zstd Filter the archive through zstd(1). Local file selection --add-file=FILE Add FILE to the archive (useful if its name starts with a dash). --backup[=CONTROL] Backup before removal. The CONTROL argument, if supplied, controls the backup policy. Its valid values are: none, off Never make backups. t, numbered Make numbered backups. 
nil, existing Make numbered backups if numbered backups exist, simple backups otherwise. never, simple Always make simple backups. If CONTROL is not given, the value is taken from the VERSION_CONTROL environment variable. If it is not set, existing is assumed. -C, --directory=DIR Change to DIR before performing any operations. This option is order-sensitive, i.e. it affects all options that follow. --exclude=PATTERN Exclude files matching PATTERN, a glob(3)-style wildcard pattern. --exclude-backups Exclude backup and lock files. --exclude-caches Exclude contents of directories containing file CACHEDIR.TAG, except for the tag file itself. --exclude-caches-all Exclude directories containing file CACHEDIR.TAG and the file itself. --exclude-caches-under Exclude everything under directories containing CACHEDIR.TAG. --exclude-ignore=FILE Before dumping a directory, see if it contains FILE. If so, read exclusion patterns from this file. The patterns affect only the directory itself. --exclude-ignore-recursive=FILE Same as --exclude-ignore, except that patterns from FILE affect both the directory and all its subdirectories. --exclude-tag=FILE Exclude contents of directories containing FILE, except for FILE itself. --exclude-tag-all=FILE Exclude directories containing FILE. --exclude-tag-under=FILE Exclude everything under directories containing FILE. --exclude-vcs Exclude version control system directories. --exclude-vcs-ignores Exclude files that match patterns read from VCS-specific ignore files. Supported files are: .cvsignore, .gitignore, .bzrignore, and .hgignore. -h, --dereference Follow symlinks; archive and dump the files they point to. --hard-dereference Follow hard links; archive and dump the files they refer to. -K, --starting-file=MEMBER Begin at the given member in the archive. --newer-mtime=DATE Work on files whose data changed after the DATE. If DATE starts with / or . it is taken to be a file name; the mtime of that file is used as the date. --no-null Disable the effect of the previous --null option. --no-recursion Avoid descending automatically in directories. --no-unquote Do not unquote input file or member names. --no-verbatim-files-from Treat each line read from a file list as if it were supplied in the command line. I.e., leading and trailing whitespace is removed and, if the resulting string begins with a dash, it is treated as a tar command line option. This is the default behavior. The --no-verbatim-files-from option is provided as a way to restore it after the --verbatim-files-from option. This option is positional: it affects all --files-from options that occur after it in the command line, until the --verbatim-files-from option or end of line, whichever occurs first. It is implied by the --no-null option. --null Instruct subsequent -T options to read null-terminated names verbatim (disables special handling of names that start with a dash). See also --verbatim-files-from. -N, --newer=DATE, --after-date=DATE Only store files newer than DATE. If DATE starts with / or . it is taken to be a file name; the mtime of that file is used as the date. --one-file-system Stay in local file system when creating archive. -P, --absolute-names Don't strip leading slashes from file names when creating archives. --recursion Recurse into directories (default). --suffix=STRING Backup before removal, override usual suffix. Default suffix is ~, unless overridden by environment variable SIMPLE_BACKUP_SUFFIX. -T, --files-from=FILE Get names to extract or create from FILE. 
Unless specified otherwise, the FILE must contain a list of names separated by ASCII LF (i.e. one name per line). The names read are handled the same way as command line arguments. They undergo quote removal and word splitting, and any string that starts with a - is handled as a tar command line option. If this behavior is undesirable, it can be turned off using the --verbatim-files-from option. The --null option instructs tar that the names in FILE are separated by the ASCII NUL character, instead of LF. It is useful if the list is generated by the find(1) -print0 predicate. --unquote Unquote file or member names (default). --verbatim-files-from Treat each line obtained from a file list as a file name, even if it starts with a dash. File lists are supplied with the --files-from (-T) option. The default behavior is to handle names supplied in file lists as if they were typed in the command line, i.e. any names starting with a dash are treated as tar options. The --verbatim-files-from option disables this behavior. This option affects all --files-from options that occur after it in the command line. Its effect is reverted by the --no-verbatim-files-from option. This option is implied by the --null option. See also --add-file. -X, --exclude-from=FILE Exclude files matching patterns listed in FILE. File name transformations --strip-components=NUMBER Strip NUMBER leading components from file names on extraction. --transform=EXPRESSION, --xform=EXPRESSION Use a sed replace EXPRESSION to transform file names. File name matching options These options affect both exclude and include patterns. --anchored Patterns match file name start. --ignore-case Ignore case. --no-anchored Patterns match after any / (default for exclusion). --no-ignore-case Case sensitive matching (default). --no-wildcards Verbatim string matching. --no-wildcards-match-slash Wildcards do not match /. --wildcards Use wildcards (default for exclusion). --wildcards-match-slash Wildcards match / (default for exclusion). Informative output --checkpoint[=N] Display progress messages every Nth record (default 10). --checkpoint-action=ACTION Run ACTION on each checkpoint. --clamp-mtime Only set time when the file is more recent than what was given with --mtime. --full-time Print file time to its full resolution. --index-file=FILE Send verbose output to FILE. -l, --check-links Print a message if not all links are dumped. --no-quote-chars=STRING Disable quoting for characters from STRING. --quote-chars=STRING Additionally quote characters from STRING. --quoting-style=STYLE Set quoting style for file and member names. Valid values for STYLE are literal, shell, shell-always, c, c-maybe, escape, locale, clocale. -R, --block-number Show block number within archive with each message. --show-omitted-dirs When listing or extracting, list each directory that does not match search criteria. --show-transformed-names, --show-stored-names Show file or archive names after transformation by the --strip-components and --transform options. --totals[=SIGNAL] Print total bytes after processing the archive. If SIGNAL is given, print total bytes when this signal is delivered. Allowed signals are: SIGHUP, SIGQUIT, SIGINT, SIGUSR1, and SIGUSR2. The SIG prefix can be omitted. --utc Print file modification times in UTC. -v, --verbose Verbosely list files processed. Each instance of this option on the command line increases the verbosity level by one. The maximum verbosity level is 3. 
For a detailed discussion of how various verbosity levels affect tar's output, please refer to GNU Tar Manual, subsection 2.5.1 "The --verbose Option". --warning=KEYWORD Enable or disable warning messages identified by KEYWORD. The messages are suppressed if KEYWORD is prefixed with no- and enabled otherwise. Multiple --warning messages accumulate. Keywords controlling general tar operation: all Enable all warning messages. This is the default. none Disable all warning messages. filename-with-nuls "%s: file name read contains nul character" alone-zero-block "A lone zero block at %s" Keywords applicable for tar --create: cachedir "%s: contains a cache directory tag %s; %s" file-shrank "%s: File shrank by %s bytes; padding with zeros" xdev "%s: file is on a different filesystem; not dumped" file-ignored "%s: Unknown file type; file ignored" "%s: socket ignored" "%s: door ignored" file-unchanged "%s: file is unchanged; not dumped" ignore-archive "%s: archive cannot contain itself; not dumped" file-removed "%s: File removed before we read it" file-changed "%s: file changed as we read it" failed-read Suppresses warnings about unreadable files or directories. This keyword applies only if used together with the --ignore-failed-read option. Keywords applicable for tar --extract: existing-file "%s: skipping existing file" timestamp "%s: implausibly old time stamp %s" "%s: time stamp %s is %s s in the future" contiguous-cast "Extracting contiguous files as regular files" symlink-cast "Attempting extraction of symbolic links as hard links" unknown-cast "%s: Unknown file type '%c', extracted as normal file" ignore-newer "Current %s is newer or same age" unknown-keyword "Ignoring unknown extended header keyword '%s'" decompress-program Controls verbose description of failures occurring when trying to run alternative decompressor programs. This warning is disabled by default (unless --verbose is used). A common example of what you can get when using this warning is: $ tar --warning=decompress-program -x -f archive.Z tar (child): cannot run compress: No such file or directory tar (child): trying gzip This means that tar first tried to decompress archive.Z using compress, and, when that failed, switched to gzip. record-size "Record size = %lu blocks" Keywords controlling incremental extraction: rename-directory "%s: Directory has been renamed from %s" "%s: Directory has been renamed" new-directory "%s: Directory is new" xdev "%s: directory is on a different device: not purging" bad-dumpdir "Malformed dumpdir: 'X' never used" -w, --interactive, --confirmation Ask for confirmation for every action. Compatibility options -o When creating, same as --old-archive. When extracting, same as --no-same-owner. Size suffixes Suffix Units Byte Equivalent b Blocks SIZE x 512 B Kilobytes SIZE x 1024 c Bytes SIZE G Gigabytes SIZE x 1024^3 K Kilobytes SIZE x 1024 k Kilobytes SIZE x 1024 M Megabytes SIZE x 1024^2 P Petabytes SIZE x 1024^5 T Terabytes SIZE x 1024^4 w Words SIZE x 2 | # tar
> Archiving utility. Often combined with a compression method, such as gzip or
> bzip2. More information: https://www.gnu.org/software/tar.
* [c]reate an archive and write it to a [f]ile:
`tar cf {{path/to/target.tar}} {{path/to/file1 path/to/file2 ...}}`
* [c]reate a g[z]ipped archive and write it to a [f]ile:
`tar czf {{path/to/target.tar.gz}} {{path/to/file1 path/to/file2 ...}}`
* [c]reate a g[z]ipped archive from a directory using relative paths:
`tar czf {{path/to/target.tar.gz}} --directory={{path/to/directory}} .`
* E[x]tract a (compressed) archive [f]ile into the current directory [v]erbosely:
`tar xvf {{path/to/source.tar[.gz|.bz2|.xz]}}`
* E[x]tract a (compressed) archive [f]ile into the target directory:
`tar xf {{path/to/source.tar[.gz|.bz2|.xz]}} --directory={{path/to/directory}}`
* [c]reate a compressed archive and write it to a [f]ile, using [a]rchive suffix to determine the compression program:
`tar caf {{path/to/target.tar.xz}} {{path/to/file1 path/to/file2 ...}}`
* Lis[t] the contents of a tar [f]ile [v]erbosely:
`tar tvf {{path/to/source.tar}}`
* E[x]tract files matching a pattern from an archive [f]ile:
`tar xf {{path/to/source.tar}} --wildcards "{{*.html}}"` |
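A couple of further examples, sketched from the `--sort` and remote-archive (`--file`, `--rsh-command`) descriptions above; the user, host, and paths are placeholders:
* [c]reate an archive with members sorted by name, so its layout is reproducible (GNU tar):
`tar --sort=name -cf {{path/to/target.tar}} {{path/to/directory}}`
* [c]reate an archive on a remote host, accessed over `ssh` instead of the default `rsh`:
`tar -cf {{user}}@{{remote_host}}:{{path/to/target.tar}} --rsh-command=/usr/bin/ssh {{path/to/directory}}`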
pr | Paginate or columnate FILE(s) for printing. With no FILE, or when FILE is -, read standard input. Mandatory arguments to long options are mandatory for short options too. +FIRST_PAGE[:LAST_PAGE], --pages=FIRST_PAGE[:LAST_PAGE] begin [stop] printing with page FIRST_[LAST_]PAGE -COLUMN, --columns=COLUMN output COLUMN columns and print columns down, unless -a is used. Balance number of lines in the columns on each page -a, --across print columns across rather than down, used together with -COLUMN -c, --show-control-chars use hat notation (^G) and octal backslash notation -d, --double-space double space the output -D, --date-format=FORMAT use FORMAT for the header date -e[CHAR[WIDTH]], --expand-tabs[=CHAR[WIDTH]] expand input CHARs (TABs) to tab WIDTH (8) -F, -f, --form-feed use form feeds instead of newlines to separate pages (by a 3-line page header with -F or a 5-line header and trailer without -F) -h, --header=HEADER use a centered HEADER instead of filename in page header, -h "" prints a blank line, don't use -h"" -i[CHAR[WIDTH]], --output-tabs[=CHAR[WIDTH]] replace spaces with CHARs (TABs) to tab WIDTH (8) -J, --join-lines merge full lines, turns off -W line truncation, no column alignment, --sep-string[=STRING] sets separators -l, --length=PAGE_LENGTH set the page length to PAGE_LENGTH (66) lines (default number of lines of text 56, and with -F 63). implies -t if PAGE_LENGTH <= 10 -m, --merge print all files in parallel, one in each column, truncate lines, but join lines of full length with -J -n[SEP[DIGITS]], --number-lines[=SEP[DIGITS]] number lines, use DIGITS (5) digits, then SEP (TAB), default counting starts with 1st line of input file -N, --first-line-number=NUMBER start counting with NUMBER at 1st line of first page printed (see +FIRST_PAGE) -o, --indent=MARGIN offset each line with MARGIN (zero) spaces, do not affect -w or -W, MARGIN will be added to PAGE_WIDTH -r, --no-file-warnings omit warning when a file cannot be opened -s[CHAR], --separator[=CHAR] separate columns by a single character, default for CHAR is the <TAB> character without -w and 'no char' with -w. -s[CHAR] turns off line truncation of all 3 column options (-COLUMN|-a -COLUMN|-m) except -w is set -S[STRING], --sep-string[=STRING] separate columns by STRING, without -S: Default separator <TAB> with -J and <space> otherwise (same as -S" "), no effect on column options -t, --omit-header omit page headers and trailers; implied if PAGE_LENGTH <= 10 -T, --omit-pagination omit page headers and trailers, eliminate any pagination by form feeds set in input files -v, --show-nonprinting use octal backslash notation -w, --width=PAGE_WIDTH set page width to PAGE_WIDTH (72) characters for multiple text-column output only, -s[char] turns off (72) -W, --page-width=PAGE_WIDTH set page width to PAGE_WIDTH (72) characters always, truncate lines, except -J option is set, no interference with -S or -s --help display this help and exit --version output version information and exit | # pr
> Paginate or columnate files for printing. More information:
> https://www.gnu.org/software/coreutils/pr.
* Print multiple files with a default header and footer:
`pr {{file1}} {{file2}} {{file3}}`
* Print with a custom centered header:
`pr -h "{{header}}" {{file1}} {{file2}} {{file3}}`
* Print with numbered lines and a custom date format:
`pr -n -D "{{format}}" {{file1}} {{file2}} {{file3}}`
* Print all files together, one in each column, without a header or footer:
`pr -m -T {{file1}} {{file2}} {{file3}}`
* Print, beginning at page 2 up to page 5, with a given page length (including header and footer):
`pr +{{2}}:{{5}} -l {{page_length}} {{file1}} {{file2}} {{file3}}`
* Print with an offset for each line and a truncating custom page width:
`pr -o {{offset}} -W {{width}} {{file1}} {{file2}} {{file3}}` |
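One more possible invocation, based on the `-COLUMN` and `-a` options described above; the column count and path are placeholders:
* Print a file in 3 columns, filling the columns across the page rather than down:
`pr -{{3}} -a {{path/to/file}}`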
git-restore | Restore specified paths in the working tree with some contents from a restore source. If a path is tracked but does not exist in the restore source, it will be removed to match the source. The command can also be used to restore the content in the index with --staged, or restore both the working tree and the index with --staged --worktree. By default, if --staged is given, the contents are restored from HEAD, otherwise from the index. Use --source to restore from a different commit. See "Reset, restore and revert" in git(1) for the differences between the three commands. THIS COMMAND IS EXPERIMENTAL. THE BEHAVIOR MAY CHANGE. -s <tree>, --source=<tree> Restore the working tree files with the content from the given tree. It is common to specify the source tree by naming a commit, branch or tag associated with it. If not specified, the contents are restored from HEAD if --staged is given, otherwise from the index. As a special case, you may use "A...B" as a shortcut for the merge base of A and B if there is exactly one merge base. You can leave out at most one of A and B, in which case it defaults to HEAD. -p, --patch Interactively select hunks in the difference between the restore source and the restore location. See the “Interactive Mode” section of git-add(1) to learn how to operate the --patch mode. Note that --patch can accept no pathspec and will prompt to restore all modified paths. -W, --worktree, -S, --staged Specify the restore location. If neither option is specified, by default the working tree is restored. Specifying --staged will only restore the index. Specifying both restores both. -q, --quiet Quiet, suppress feedback messages. Implies --no-progress. --progress, --no-progress Progress status is reported on the standard error stream by default when it is attached to a terminal, unless --quiet is specified. This flag enables progress reporting even if not attached to a terminal, regardless of --quiet. --ours, --theirs When restoring files in the working tree from the index, use stage #2 (ours) or #3 (theirs) for unmerged paths. Note that during git rebase and git pull --rebase, ours and theirs may appear swapped. See the explanation of the same options in git-checkout(1) for details. -m, --merge When restoring files on the working tree from the index, recreate the conflicted merge in the unmerged paths. --conflict=<style> The same as --merge option above, but changes the way the conflicting hunks are presented, overriding the merge.conflictStyle configuration variable. Possible values are "merge" (default), "diff3", and "zdiff3". --ignore-unmerged When restoring files on the working tree from the index, do not abort the operation if there are unmerged entries and neither --ours, --theirs, --merge or --conflict is specified. Unmerged paths on the working tree are left alone. --ignore-skip-worktree-bits In sparse checkout mode, by default is to only update entries matched by <pathspec> and sparse patterns in $GIT_DIR/info/sparse-checkout. This option ignores the sparse patterns and unconditionally restores any files in <pathspec>. --recurse-submodules, --no-recurse-submodules If <pathspec> names an active submodule and the restore location includes the working tree, the submodule will only be updated if this option is given, in which case its working tree will be restored to the commit recorded in the superproject, and any local modifications overwritten. If nothing (or --no-recurse-submodules) is used, submodules working trees will not be updated. 
Just like git-checkout(1), this will detach HEAD of the submodule. --overlay, --no-overlay In overlay mode, the command never removes files when restoring. In no-overlay mode, tracked files that do not appear in the --source tree are removed, to make them match <tree> exactly. The default is no-overlay mode. --pathspec-from-file=<file> Pathspec is passed in <file> instead of commandline args. If <file> is exactly - then standard input is used. Pathspec elements are separated by LF or CR/LF. Pathspec elements can be quoted as explained for the configuration variable core.quotePath (see git-config(1)). See also --pathspec-file-nul and global --literal-pathspecs. --pathspec-file-nul Only meaningful with --pathspec-from-file. Pathspec elements are separated with NUL character and all other characters are taken literally (including newlines and quotes). -- Do not interpret any more arguments as options. <pathspec>... Limits the paths affected by the operation. For more details, see the pathspec entry in gitglossary(7). | # git restore
> Restore working tree files. Requires Git version 2.23+. See also `git
> checkout` and `git reset`. More information: https://git-scm.com/docs/git-
> restore.
* Restore an unstaged file to the version in the index, discarding unstaged changes:
`git restore {{path/to/file}}`
* Restore an unstaged file to the version of a specific commit:
`git restore --source {{commit}} {{path/to/file}}`
* Discard all unstaged changes to tracked files:
`git restore :/`
* Unstage a file:
`git restore --staged {{path/to/file}}`
* Unstage all files:
`git restore --staged :/`
* Discard all changes to files, both staged and unstaged:
`git restore --worktree --staged :/`
* Interactively select sections of files to restore:
`git restore --patch` |
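A further example, drawing on the `--source`, `--staged`, and `--worktree` options described above; the branch/commit name and path are placeholders:
* Restore both the index and the working tree copy of a file from a given branch or commit:
`git restore --source={{branch_or_commit}} --staged --worktree {{path/to/file}}`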
git-archive | Creates an archive of the specified format containing the tree structure for the named tree, and writes it out to the standard output. If <prefix> is specified it is prepended to the filenames in the archive. git archive behaves differently when given a tree ID versus when given a commit ID or tag ID. In the first case the current time is used as the modification time of each file in the archive. In the latter case the commit time as recorded in the referenced commit object is used instead. Additionally the commit ID is stored in a global extended pax header if the tar format is used; it can be extracted using git get-tar-commit-id. In ZIP files it is stored as a file comment. --format=<fmt> Format of the resulting archive. Possible values are tar, zip, tar.gz, tgz, and any format defined using the configuration option tar.<format>.command. If --format is not given, and the output file is specified, the format is inferred from the filename if possible (e.g. writing to foo.zip makes the output to be in the zip format). Otherwise the output format is tar. -l, --list Show all available formats. -v, --verbose Report progress to stderr. --prefix=<prefix>/ Prepend <prefix>/ to paths in the archive. Can be repeated; its rightmost value is used for all tracked files. See below which value gets used by --add-file and --add-virtual-file. -o <file>, --output=<file> Write the archive to <file> instead of stdout. --add-file=<file> Add a non-tracked file to the archive. Can be repeated to add multiple files. The path of the file in the archive is built by concatenating the value of the last --prefix option (if any) before this --add-file and the basename of <file>. --add-virtual-file=<path>:<content> Add the specified contents to the archive. Can be repeated to add multiple files. The path of the file in the archive is built by concatenating the value of the last --prefix option (if any) before this --add-virtual-file and <path>. The <path> argument can start and end with a literal double-quote character; the contained file name is interpreted as a C-style string, i.e. the backslash is interpreted as escape character. The path must be quoted if it contains a colon, to avoid the colon from being misinterpreted as the separator between the path and the contents, or if the path begins or ends with a double-quote character. The file mode is limited to a regular file, and the option may be subject to platform-dependent command-line limits. For non-trivial cases, write an untracked file and use --add-file instead. --worktree-attributes Look for attributes in .gitattributes files in the working tree as well (see the section called “ATTRIBUTES”). --mtime=<time> Set modification time of archive entries. Without this option the committer time is used if <tree-ish> is a commit or tag, and the current time if it is a tree. <extra> This can be any options that the archiver backend understands. See next section. --remote=<repo> Instead of making a tar archive from the local repository, retrieve a tar archive from a remote repository. Note that the remote repository may place restrictions on which sha1 expressions may be allowed in <tree-ish>. See git-upload-archive(1) for details. --exec=<git-upload-archive> Used with --remote to specify the path to the git-upload-archive on the remote side. <tree-ish> The tree or commit to produce an archive for. <path> Without an optional path parameter, all files and subdirectories of the current working directory are included in the archive. 
If one or more paths are specified, only these are included. | # git archive
> Create an archive of files from a named tree. More information: https://git-
> scm.com/docs/git-archive.
* Create a tar archive from the contents of the current HEAD and print it to `stdout`:
`git archive --verbose HEAD`
* Create a zip archive from the current HEAD and print it to `stdout`:
`git archive --verbose --format=zip HEAD`
* Same as above, but write the zip archive to file:
`git archive --verbose --output={{path/to/file.zip}} HEAD`
* Create a tar archive from the contents of the latest commit on a specific branch:
`git archive --output={{path/to/file.tar}} {{branch_name}}`
* Create a tar archive from the contents of a specific directory:
`git archive --output={{path/to/file.tar}} HEAD:{{path/to/directory}}`
* Prepend a path to each file to archive it inside a specific directory:
`git archive --output={{path/to/file.tar}} --prefix={{path/to/prepend}}/ HEAD` |
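Two further examples, based on the `--list` and `--add-file` options described above; the untracked file path is a placeholder:
* List all available archive formats:
`git archive --list`
* Include an untracked file alongside the tracked contents of HEAD:
`git archive --output={{path/to/file.tar}} --add-file={{path/to/untracked_file}} HEAD`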
uname | Print certain system information. With no OPTION, same as -s. -a, --all print all information, in the following order, except omit -p and -i if unknown: -s, --kernel-name print the kernel name -n, --nodename print the network node hostname -r, --kernel-release print the kernel release -v, --kernel-version print the kernel version -m, --machine print the machine hardware name -p, --processor print the processor type (non-portable) -i, --hardware-platform print the hardware platform (non-portable) -o, --operating-system print the operating system --help display this help and exit --version output version information and exit | # uname
> Print details about the current machine and the operating system running on
> it. Note: for additional information about the operating system, try the
> `sw_vers` command. More information: https://ss64.com/osx/uname.html.
* Print kernel name:
`uname`
* Print system architecture and processor information:
`uname -mp`
* Print kernel name, kernel release and kernel version:
`uname -srv`
* Print system hostname:
`uname -n`
* Print all available system information:
`uname -a` |
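One more possible invocation, based on the GNU option list above; `-o` is non-portable and may be unavailable on some systems:
* Print the operating system name:
`uname -o`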
tee | Copy standard input to each FILE, and also to standard output. -a, --append append to the given FILEs, do not overwrite -i, --ignore-interrupts ignore interrupt signals -p operate in a more appropriate MODE with pipes. --output-error[=MODE] set behavior on write error. See MODE below --help display this help and exit --version output version information and exit MODE determines behavior with write errors on the outputs: warn diagnose errors writing to any output warn-nopipe diagnose errors writing to any output not a pipe exit exit on error writing to any output exit-nopipe exit on error writing to any output not a pipe The default MODE for the -p option is 'warn-nopipe'. With "nopipe" MODEs, exit immediately if all outputs become broken pipes. The default operation when --output-error is not specified, is to exit immediately on error writing to a pipe, and diagnose errors writing to non pipe outputs. | # tee
> Read from `stdin` and write to `stdout` and files (or commands). More
> information: https://www.gnu.org/software/coreutils/tee.
* Copy `stdin` to each file, and also to `stdout`:
`echo "example" | tee {{path/to/file}}`
* Append to the given files, do not overwrite:
`echo "example" | tee -a {{path/to/file}}`
* Print `stdin` to the terminal, and also pipe it into another program for further processing:
`echo "example" | tee {{/dev/tty}} | {{xargs printf "[%s]"}}`
* Create a directory called "example", count the number of characters in "example" and write "example" to the terminal:
`echo "example" | tee >(xargs mkdir) >(wc -c)` |
join | For each pair of input lines with identical join fields, write a line to standard output. The default join field is the first, delimited by blanks. When FILE1 or FILE2 (not both) is -, read standard input. -a FILENUM also print unpairable lines from file FILENUM, where FILENUM is 1 or 2, corresponding to FILE1 or FILE2 -e STRING replace missing (empty) input fields with STRING; I.e., missing fields specified with '-12jo' options -i, --ignore-case ignore differences in case when comparing fields -j FIELD equivalent to '-1 FIELD -2 FIELD' -o FORMAT obey FORMAT while constructing output line -t CHAR use CHAR as input and output field separator -v FILENUM like -a FILENUM, but suppress joined output lines -1 FIELD join on this FIELD of file 1 -2 FIELD join on this FIELD of file 2 --check-order check that the input is correctly sorted, even if all input lines are pairable --nocheck-order do not check that the input is correctly sorted --header treat the first line in each file as field headers, print them without trying to pair them -z, --zero-terminated line delimiter is NUL, not newline --help display this help and exit --version output version information and exit Unless -t CHAR is given, leading blanks separate fields and are ignored, else fields are separated by CHAR. Any FIELD is a field number counted from 1. FORMAT is one or more comma or blank separated specifications, each being 'FILENUM.FIELD' or '0'. Default FORMAT outputs the join field, the remaining fields from FILE1, the remaining fields from FILE2, all separated by CHAR. If FORMAT is the keyword 'auto', then the first line of each file determines the number of fields output for each line. Important: FILE1 and FILE2 must be sorted on the join fields. E.g., use "sort -k 1b,1" if 'join' has no options, or use "join -t ''" if 'sort' has no options. Note, comparisons honor the rules specified by 'LC_COLLATE'. If the input is not sorted and some lines cannot be joined, a warning message will be given. | # join
> Join lines of two sorted files on a common field. More information:
> https://www.gnu.org/software/coreutils/join.
* Join two files on the first (default) field:
`join {{file1}} {{file2}}`
* Join two files using a comma (instead of a space) as the field separator:
`join -t {{','}} {{file1}} {{file2}}`
* Join field3 of file1 with field1 of file2:
`join -1 {{3}} -2 {{1}} {{file1}} {{file2}}`
* Produce a line for each unpairable line for file1:
`join -a {{1}} {{file1}} {{file2}}`
* Join a file from `stdin`:
`cat {{path/to/file1}} | join - {{path/to/file2}}` |
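One more possible invocation, based on the `-a`, `-e`, and `-o auto` descriptions above (GNU join); the filler string is a placeholder:
* Produce a full outer join, keeping unpairable lines from both files and filling missing fields:
`join -a {{1}} -a {{2}} -e {{'NA'}} -o auto {{file1}} {{file2}}`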
pidof | Pidof finds the process id's (pids) of the named programs. It prints those id's on the standard output. -s Single shot - this instructs the program to only return one pid. -c Only return process ids that are running with the same root directory. This option is ignored for non-root users, as they will be unable to check the current root directory of processes they do not own. -q Quiet mode, suppress any output and only sets the exit status accordingly. -w Show also processes that do not have visible command line (e.g. kernel worker threads). -x Scripts too - this causes the program to also return process id's of shells running the named scripts. -o omitpid Tells pidof to omit processes with that process id. The special pid %PPID can be used to name the parent process of the pidof program, in other words the calling shell or shell script. -S separator Use separator as a separator put between pids. Used only when more than one pids are printed for the program. The -d option is an alias for this option for sysvinit pidof compatibility. | # pidof
> Gets the ID of a process using its name. More information:
> https://manned.org/pidof.
* List all process IDs with given name:
`pidof {{bash}}`
* List a single process ID with given name:
`pidof -s {{bash}}`
* List process IDs including scripts with given name:
`pidof -x {{script.py}}`
* Kill all processes with given name:
`kill $(pidof {{name}})` |
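A further example, based on the `-o` option and the special `%PPID` value described above:
* List process IDs with a given name, omitting the shell that invoked `pidof` (handy inside scripts):
`pidof -o %PPID {{bash}}`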
wait | When an asynchronous list (see Section 2.9.3.1, Examples) is started by the shell, the process ID of the last command in each element of the asynchronous list shall become known in the current shell execution environment; see Section 2.12, Shell Execution Environment. If the wait utility is invoked with no operands, it shall wait until all process IDs known to the invoking shell have terminated and exit with a zero exit status. If one or more pid operands are specified that represent known process IDs, the wait utility shall wait until all of them have terminated. If one or more pid operands are specified that represent unknown process IDs, wait shall treat them as if they were known process IDs that exited with exit status 127. The exit status returned by the wait utility shall be the exit status of the process requested by the last pid operand. The known process IDs are applicable only for invocations of wait in the current shell execution environment. None. | # wait
> Wait for a process to complete before proceeding. More information:
> https://manned.org/wait.
* Wait for a process to finish given its process ID (PID) and return its exit status:
`wait {{pid}}`
* Wait for all processes known to the invoking shell to finish:
`wait` |
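One more example, a common shell pattern using the `$!` parameter (the PID of the most recent background command); the backgrounded command is a placeholder:
* Start a command in the background and wait for that specific job to finish:
`{{sleep 10}} & wait $!`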
git-difftool | git difftool is a Git command that allows you to compare and edit files between revisions using common diff tools. git difftool is a frontend to git diff and accepts the same options and arguments. See git-diff(1). -d, --dir-diff Copy the modified files to a temporary location and perform a directory diff on them. This mode never prompts before launching the diff tool. -y, --no-prompt Do not prompt before launching a diff tool. --prompt Prompt before each invocation of the diff tool. This is the default behaviour; the option is provided to override any configuration settings. --rotate-to=<file> Start showing the diff for the given path, the paths before it will move to end and output. --skip-to=<file> Start showing the diff for the given path, skipping all the paths before it. -t <tool>, --tool=<tool> Use the diff tool specified by <tool>. Valid values include emerge, kompare, meld, and vimdiff. Run git difftool --tool-help for the list of valid <tool> settings. If a diff tool is not specified, git difftool will use the configuration variable diff.tool. If the configuration variable diff.tool is not set, git difftool will pick a suitable default. You can explicitly provide a full path to the tool by setting the configuration variable difftool.<tool>.path. For example, you can configure the absolute path to kdiff3 by setting difftool.kdiff3.path. Otherwise, git difftool assumes the tool is available in PATH. Instead of running one of the known diff tools, git difftool can be customized to run an alternative program by specifying the command line to invoke in a configuration variable difftool.<tool>.cmd. When git difftool is invoked with this tool (either through the -t or --tool option or the diff.tool configuration variable) the configured command line will be invoked with the following variables available: $LOCAL is set to the name of the temporary file containing the contents of the diff pre-image and $REMOTE is set to the name of the temporary file containing the contents of the diff post-image. $MERGED is the name of the file which is being compared. $BASE is provided for compatibility with custom merge tool commands and has the same value as $MERGED. --tool-help Print a list of diff tools that may be used with --tool. --[no-]symlinks git difftool's default behavior is create symlinks to the working tree when run in --dir-diff mode and the right-hand side of the comparison yields the same content as the file in the working tree. Specifying --no-symlinks instructs git difftool to create copies instead. --no-symlinks is the default on Windows. -x <command>, --extcmd=<command> Specify a custom command for viewing diffs. git-difftool ignores the configured defaults and runs $command $LOCAL $REMOTE when this option is specified. Additionally, $BASE is set in the environment. -g, --[no-]gui When git-difftool is invoked with the -g or --gui option the default diff tool will be read from the configured diff.guitool variable instead of diff.tool. This may be selected automatically using the configuration variable difftool.guiDefault. The --no-gui option can be used to override these settings. If diff.guitool is not set, we will fallback in the order of merge.guitool, diff.tool, merge.tool until a tool is found. --[no-]trust-exit-code git-difftool invokes a diff tool individually on each file. Errors reported by the diff tool are ignored by default. Use --trust-exit-code to make git-difftool exit when an invoked diff tool returns a non-zero exit code. 
git-difftool will forward the exit code of the invoked tool when --trust-exit-code is used. See git-diff(1) for the full list of supported options. | # git difftool
> Show file changes using external diff tools. Accepts the same options and
> arguments as `git diff`. See also: `git diff`. More information:
> https://git-scm.com/docs/git-difftool.
* List available diff tools:
`git difftool --tool-help`
* Set the default diff tool to meld:
`git config --global diff.tool "{{meld}}"`
* Use the default diff tool to show staged changes:
`git difftool --staged`
* Use a specific tool (opendiff) to show changes since a given commit:
`git difftool --tool={{opendiff}} {{commit}}` |
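A further example, based on the `--dir-diff` option described above:
* Compare all changed files in a single directory diff instead of being prompted once per file:
`git difftool --dir-diff`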
wc | Print newline, word, and byte counts for each FILE, and a total line if more than one FILE is specified. A word is a non-zero-length sequence of printable characters delimited by white space. With no FILE, or when FILE is -, read standard input. The options below may be used to select which counts are printed, always in the following order: newline, word, character, byte, maximum line length. -c, --bytes print the byte counts -m, --chars print the character counts -l, --lines print the newline counts --files0-from=F read input from the files specified by NUL-terminated names in file F; If F is - then read names from standard input -L, --max-line-length print the maximum display width -w, --words print the word counts --total=WHEN when to print a line with total counts; WHEN can be: auto, always, only, never --help display this help and exit --version output version information and exit | # wc
> Count lines, words, or bytes. More information:
> https://ss64.com/osx/wc.html.
* Count lines in file:
`wc -l {{path/to/file}}`
* Count words in file:
`wc -w {{path/to/file}}`
* Count characters (bytes) in file:
`wc -c {{path/to/file}}`
* Count characters in file (taking multi-byte character sets into account):
`wc -m {{path/to/file}}`
* Use `stdin` to count lines, words and characters (bytes) in that order:
`{{find .}} | wc` |
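One more possible invocation, based on the `-L` option described above (GNU wc):
* Print the length of the longest line in a file:
`wc -L {{path/to/file}}`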
passwd | The passwd command changes passwords for user accounts. A normal user may only change the password for their own account, while the superuser may change the password for any account. passwd also changes the account or associated password validity period. Password Changes The user is first prompted for their old password, if one is present. This password is then encrypted and compared against the stored password. The user has only one chance to enter the correct password. The superuser is permitted to bypass this step so that forgotten passwords may be changed. After the password has been entered, password aging information is checked to see if the user is permitted to change the password at this time. If not, passwd refuses to change the password and exits. The user is then prompted twice for a replacement password. The second entry is compared against the first and both are required to match in order for the password to be changed. Then, the password is tested for complexity. passwd will reject any password which is not suitably complex. Care must be taken not to include the system default erase or kill characters. Hints for user passwords The security of a password depends upon the strength of the encryption algorithm and the size of the key space. The legacy UNIX System encryption method is based on the NBS DES algorithm. More recent methods are now recommended (see ENCRYPT_METHOD). The size of the key space depends upon the randomness of the password which is selected. Compromises in password security normally result from careless password selection or handling. For this reason, you should not select a password which appears in a dictionary or which must be written down. The password should also not be a proper name, your license number, birth date, or street address. Any of these may be used as guesses to violate system security. As a general guideline, passwords should be long and random. It's fine to use simple character sets, such as passwords consisting only of lowercase letters, if that helps memorizing longer passwords. For a password consisting only of lowercase English letters randomly chosen, and a length of 32, there are 26^32 (approximately 2^150) different possible combinations. Being an exponential equation, it's apparent that the exponent (the length) is more important than the base (the size of the character set). You can find advice on how to choose a strong password on http://en.wikipedia.org/wiki/Password_strength The options which apply to the passwd command are: -a, --all This option can be used only with -S and causes show status for all users. -d, --delete Delete a user's password (make it empty). This is a quick way to disable a password for an account. It will set the named account passwordless. -e, --expire Immediately expire an account's password. This in effect can force a user to change their password at the user's next login. -h, --help Display help message and exit. -i, --inactive INACTIVE This option is used to disable an account after the password has been expired for a number of days. After a user account has had an expired password for INACTIVE days, the user may no longer sign on to the account. -k, --keep-tokens Indicate password change should be performed only for expired authentication tokens (passwords). The user wishes to keep their non-expired tokens as before. -l, --lock Lock the password of the named account. 
This option disables a password by changing it to a value which matches no possible encrypted value (it adds a '!' at the beginning of the password). Note that this does not disable the account. The user may still be able to log in using another authentication token (e.g. an SSH key). To disable the account, administrators should use usermod --expiredate 1 (this sets the account's expire date to Jan 2, 1970). Users with a locked password are not allowed to change their password. -n, --mindays MIN_DAYS Set the minimum number of days between password changes to MIN_DAYS. A value of zero for this field indicates that the user may change their password at any time. -q, --quiet Quiet mode. -r, --repository REPOSITORY Change the password in the REPOSITORY repository. -R, --root CHROOT_DIR Apply changes in the CHROOT_DIR directory and use the configuration files from the CHROOT_DIR directory. Only absolute paths are supported. -S, --status Display account status information. The status information consists of 7 fields. The first field is the user's login name. The second field indicates if the user account has a locked password (L), has no password (NP), or has a usable password (P). The third field gives the date of the last password change. The next four fields are the minimum age, maximum age, warning period, and inactivity period for the password. These ages are expressed in days. -u, --unlock Unlock the password of the named account. This option re-enables a password by changing the password back to its previous value (to the value before using the -l option). -w, --warndays WARN_DAYS Set the number of days of warning before a password change is required. The WARN_DAYS option is the number of days prior to the password expiring that a user will be warned that their password is about to expire. -x, --maxdays MAX_DAYS Set the maximum number of days a password remains valid. After MAX_DAYS, the password is required to be changed. Passing the number -1 as MAX_DAYS will remove checking a password's validity. 
> Passwd is a tool used to change a user's password. More information:
> https://manned.org/passwd.
* Change the password of the current user interactively:
`passwd`
* Change the password of a specific user:
`passwd {{username}}`
* Get the current status of the user:
`passwd -S`
* Make the password of a specific account blank (this sets the named account passwordless):
`passwd -d {{username}}` |
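Two further examples, sketched from the `-l` and `-e` options described above; both typically require superuser privileges, and the username is a placeholder:
* Lock the password of an account (prepends `!` to the stored hash; the account itself is not disabled):
`passwd -l {{username}}`
* Immediately expire an account's password, forcing the user to change it at the next login:
`passwd -e {{username}}`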
command | The command utility shall cause the shell to treat the arguments as a simple command, suppressing the shell function lookup that is described in Section 2.9.1.1, Command Search and Execution, item 1b. If the command_name is the same as the name of one of the special built-in utilities, the special properties in the enumerated list at the beginning of Section 2.14, Special Built-In Utilities shall not occur. In every other respect, if command_name is not the name of a function, the effect of command (with no options) shall be the same as omitting command. When the -v or -V option is used, the command utility shall provide information concerning how a command name is interpreted by the shell. The command utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines. The following options shall be supported: -p Perform the command search using a default value for PATH that is guaranteed to find all of the standard utilities. -v Write a string to standard output that indicates the pathname or command that will be used by the shell, in the current shell execution environment (see Section 2.12, Shell Execution Environment), to invoke command_name, but do not invoke command_name. * Utilities, regular built-in utilities, command_names including a <slash> character, and any implementation-defined functions that are found using the PATH variable (as described in Section 2.9.1.1, Command Search and Execution), shall be written as absolute pathnames. * Shell functions, special built-in utilities, regular built-in utilities not associated with a PATH search, and shell reserved words shall be written as just their names. * An alias shall be written as a command line that represents its alias definition. * Otherwise, no output shall be written and the exit status shall reflect that the name was not found. -V Write a string to standard output that indicates how the name given in the command_name operand will be interpreted by the shell, in the current shell execution environment (see Section 2.12, Shell Execution Environment), but do not invoke command_name. Although the format of this string is unspecified, it shall indicate in which of the following categories command_name falls and shall include the information stated: * Utilities, regular built-in utilities, and any implementation-defined functions that are found using the PATH variable (as described in Section 2.9.1.1, Command Search and Execution), shall be identified as such and include the absolute pathname in the string. * Other shell functions shall be identified as functions. * Aliases shall be identified as aliases and their definitions included in the string. * Special built-in utilities shall be identified as special built-in utilities. * Regular built-in utilities not associated with a PATH search shall be identified as regular built-in utilities. (The term ``regular'' need not be used.) * Shell reserved words shall be identified as reserved words. | # command
> Command forces the shell to execute the program and ignore any functions,
> builtins and aliases with the same name. More information:
> https://manned.org/command.
* Execute the `ls` program literally, even if an `ls` alias exists:
`command {{ls}}`
* Display the path to the executable or the alias definition of a specific command:
`command -v {{command_name}}` |
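One more possible invocation, based on the `-V` option described above:
* Describe how the shell would interpret a name (alias, function, built-in, or external command):
`command -V {{command_name}}`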
getent | The getent command displays entries from databases supported by the Name Service Switch libraries, which are configured in /etc/nsswitch.conf. If one or more key arguments are provided, then only the entries that match the supplied keys will be displayed. Otherwise, if no key is provided, all entries will be displayed (unless the database does not support enumeration). The database may be any of those supported by the GNU C Library, listed below: ahosts When no key is provided, use sethostent(3), gethostent(3), and endhostent(3) to enumerate the hosts database. This is identical to using hosts. When one or more key arguments are provided, pass each key in succession to getaddrinfo(3) with the address family AF_UNSPEC, enumerating each socket address structure returned. ahostsv4 Same as ahosts, but use the address family AF_INET. ahostsv6 Same as ahosts, but use the address family AF_INET6. The call to getaddrinfo(3) in this case includes the AI_V4MAPPED flag. aliases When no key is provided, use setaliasent(3), getaliasent(3), and endaliasent(3) to enumerate the aliases database. When one or more key arguments are provided, pass each key in succession to getaliasbyname(3) and display the result. ethers When one or more key arguments are provided, pass each key in succession to ether_aton(3) and ether_hostton(3) until a result is obtained, and display the result. Enumeration is not supported on ethers, so a key must be provided. group When no key is provided, use setgrent(3), getgrent(3), and endgrent(3) to enumerate the group database. When one or more key arguments are provided, pass each numeric key to getgrgid(3) and each nonnumeric key to getgrnam(3) and display the result. gshadow When no key is provided, use setsgent(3), getsgent(3), and endsgent(3) to enumerate the gshadow database. When one or more key arguments are provided, pass each key in succession to getsgnam(3) and display the result. hosts When no key is provided, use sethostent(3), gethostent(3), and endhostent(3) to enumerate the hosts database. When one or more key arguments are provided, pass each key to gethostbyaddr(3) or gethostbyname2(3), depending on whether a call to inet_pton(3) indicates that the key is an IPv6 or IPv4 address or not, and display the result. initgroups When one or more key arguments are provided, pass each key in succession to getgrouplist(3) and display the result. Enumeration is not supported on initgroups, so a key must be provided. netgroup When one key is provided, pass the key to setnetgrent(3) and, using getnetgrent(3) display the resulting string triple (hostname, username, domainname). Alternatively, three keys may be provided, which are interpreted as the hostname, username, and domainname to match to a netgroup name via innetgr(3). Enumeration is not supported on netgroup, so either one or three keys must be provided. networks When no key is provided, use setnetent(3), getnetent(3), and endnetent(3) to enumerate the networks database. When one or more key arguments are provided, pass each numeric key to getnetbyaddr(3) and each nonnumeric key to getnetbyname(3) and display the result. passwd When no key is provided, use setpwent(3), getpwent(3), and endpwent(3) to enumerate the passwd database. When one or more key arguments are provided, pass each numeric key to getpwuid(3) and each nonnumeric key to getpwnam(3) and display the result. protocols When no key is provided, use setprotoent(3), getprotoent(3), and endprotoent(3) to enumerate the protocols database. 
When one or more key arguments are provided, pass each numeric key to getprotobynumber(3) and each nonnumeric key to getprotobyname(3) and display the result. rpc When no key is provided, use setrpcent(3), getrpcent(3), and endrpcent(3) to enumerate the rpc database. When one or more key arguments are provided, pass each numeric key to getrpcbynumber(3) and each nonnumeric key to getrpcbyname(3) and display the result. services When no key is provided, use setservent(3), getservent(3), and endservent(3) to enumerate the services database. When one or more key arguments are provided, pass each numeric key to getservbynumber(3) and each nonnumeric key to getservbyname(3) and display the result. shadow When no key is provided, use setspent(3), getspent(3), and endspent(3) to enumerate the shadow database. When one or more key arguments are provided, pass each key in succession to getspnam(3) and display the result. -s service, --service service Override all databases with the specified service. (Since glibc 2.2.5.) -s database:service, --service database:service Override only specified databases with the specified service. The option may be used multiple times, but only the last service for each database will be used. (Since glibc 2.4.) -i, --no-idn Disables IDN encoding in lookups for ahosts/getaddrinfo(3) (Since glibc-2.13.) -?, --help Print a usage summary and exit. --usage Print a short usage summary and exit. -V, --version Print the version number, license, and disclaimer of warranty for getent. | # getent
> Get entries from Name Service Switch libraries. More information:
> https://manned.org/getent.
* Get list of all groups:
`getent group`
* See the members of a group:
`getent group {{group_name}}`
* Get list of all services:
`getent services`
* Find a username by UID:
`getent passwd {{1000}}`
* Perform a reverse DNS lookup:
`getent hosts {{host}}` |
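A further example, based on the `ahostsv4` database described above; the hostname is a placeholder:
* Resolve a hostname to IPv4 addresses only:
`getent ahostsv4 {{hostname}}`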
dd | Copy a file, converting and formatting according to the operands. bs=BYTES read and write up to BYTES bytes at a time (default: 512); overrides ibs and obs cbs=BYTES convert BYTES bytes at a time conv=CONVS convert the file as per the comma separated symbol list count=N copy only N input blocks ibs=BYTES read up to BYTES bytes at a time (default: 512) if=FILE read from FILE instead of stdin iflag=FLAGS read as per the comma separated symbol list obs=BYTES write BYTES bytes at a time (default: 512) of=FILE write to FILE instead of stdout oflag=FLAGS write as per the comma separated symbol list seek=N (or oseek=N) skip N obs-sized output blocks skip=N (or iseek=N) skip N ibs-sized input blocks status=LEVEL The LEVEL of information to print to stderr; 'none' suppresses everything but error messages, 'noxfer' suppresses the final transfer statistics, 'progress' shows periodic transfer statistics N and BYTES may be followed by the following multiplicative suffixes: c=1, w=2, b=512, kB=1000, K=1024, MB=1000*1000, M=1024*1024, xM=M, GB=1000*1000*1000, G=1024*1024*1024, and so on for T, P, E, Z, Y, R, Q. Binary prefixes can be used, too: KiB=K, MiB=M, and so on. If N ends in 'B', it counts bytes not blocks. Each CONV symbol may be: ascii from EBCDIC to ASCII ebcdic from ASCII to EBCDIC ibm from ASCII to alternate EBCDIC block pad newline-terminated records with spaces to cbs-size unblock replace trailing spaces in cbs-size records with newline lcase change upper case to lower case ucase change lower case to upper case sparse try to seek rather than write all-NUL output blocks swab swap every pair of input bytes sync pad every input block with NULs to ibs-size; when used with block or unblock, pad with spaces rather than NULs excl fail if the output file already exists nocreat do not create the output file notrunc do not truncate the output file noerror continue after read errors fdatasync physically write output file data before finishing fsync likewise, but also write metadata Each FLAG symbol may be: append append mode (makes sense only for output; conv=notrunc suggested) direct use direct I/O for data directory fail unless a directory dsync use synchronized I/O for data sync likewise, but also for metadata fullblock accumulate full blocks of input (iflag only) nonblock use non-blocking I/O noatime do not update access time nocache Request to drop cache. See also oflag=sync noctty do not assign controlling terminal from file nofollow do not follow symlinks Sending a USR1 signal to a running 'dd' process makes it print I/O statistics to standard error and then resume copying. Options are: --help display this help and exit --version output version information and exit | # dd
> Convert and copy a file. More information: https://keith.github.io/xcode-
> man-pages/dd.1.html.
* Make a bootable USB drive from an isohybrid file (such as `archlinux-xxx.iso`) and show the progress:
`dd if={{path/to/file.iso}} of={{/dev/usb_device}} status=progress`
* Clone a drive to another drive with a 4 MiB block size, ignoring read errors, and show the progress:
`dd if={{/dev/source_device}} of={{/dev/dest_device}} bs={{4M}} conv={{noerror}} status=progress`
* Generate a file of 100 random bytes using the kernel random driver:
`dd if=/dev/urandom of={{path/to/random_file}} bs={{100}} count={{1}}`
* Benchmark the write performance of a disk:
`dd if=/dev/zero of={{path/to/1GB_file}} bs={{1024}} count={{1000000}}`
* Generate a system backup into an IMG file and show the progress:
`dd if=/dev/{{drive_device}} of={{path/to/file.img}} status=progress`
* Restore a drive from an IMG file and show the progress:
`dd if={{path/to/file.img}} of={{/dev/drive_device}} status=progress`
* Check the progress of an ongoing dd operation (run this command from another shell):
`kill -USR1 $(pgrep ^dd)` |
bg | If job control is enabled (see the description of set -m), the bg utility shall resume suspended jobs from the current environment (see Section 2.12, Shell Execution Environment) by running them as background jobs. If the job specified by job_id is already a running background job, the bg utility shall have no effect and shall exit successfully. Using bg to place a job into the background shall cause its process ID to become ``known in the current shell execution environment'', as if it had been started as an asynchronous list; see Section 2.9.3.1, Examples. None. | # bg
> Resumes jobs that have been suspended (e.g. using `Ctrl + Z`), and keeps
> them running in the background. More information: https://manned.org/bg.
* Resume the most recently suspended job and run it in the background:
`bg`
* Resume a specific job (use `jobs -l` to get its ID) and run it in the background:
`bg %{{job_id}}` |